XPEnology Community

snailium

Transition Member
  • Posts

    17
  • Joined

  • Last visited

  • Days Won

    1

snailium last won the day on September 15 2023

snailium had the most liked content!


  1. Yes, there are several issues related to the HDD problem after 23.9.0. https://github.com/wjz304/arpl-i18n/issues/199 https://github.com/wjz304/arpl-i18n/issues/201 https://github.com/wjz304/arpl-i18n/issues/206 The discussion is mainly in #199.
  2. Hi Peter, After several experiments, I managed to get rid of the corrupt disks. The problem is caused by the mpt3sas driver update in ARPL-i18n 23.9.0. Current working set: ARPL-i18n 23.9.1, LKMs 23.9.1, addons 23.9.1, modules 23.8.5. People with the corrupt-disk problem can try reverting the modules to 23.8.5 through the update menu -> Local upload. https://github.com/wjz304/arpl-modules/releases/tag/23.8.5
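     In case it helps, here is a rough Python sketch (my own illustration, not part of the loader) that queries the public GitHub releases API for the 23.8.5 tag of wjz304/arpl-modules and prints the attached asset names and download URLs, so you know exactly which file to grab for "Local upload". It only assumes the standard library and outbound internet access.

         import json
         import urllib.request

         # Public GitHub API endpoint for a release identified by its tag.
         URL = ("https://api.github.com/repos/wjz304/arpl-modules"
                "/releases/tags/23.8.5")

         with urllib.request.urlopen(URL) as resp:
             release = json.load(resp)

         # Print every asset attached to the 23.8.5 release so you can pick
         # the archive to download and feed to the loader's "Local upload".
         for asset in release.get("assets", []):
             print(asset["name"], asset["browser_download_url"])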
  3. Thanks Peter, I'll try using legacy BIOS with HDD detection. Currently on my baremetal box I'm using BIOS mode but with the controller set to off, so it bypasses the HDD detection and saves me some boot time. But this problem also exists on a Proxmox VM, which I cannot switch to legacy BIOS mode. We probably need to find some other solution.
  4. Hi Peter, Thanks for this serial update. I've been waiting for this feature for a long time!!! But it seems this feature introduces several problems. (It might be this feature or ARPL itself, I'm not sure; I've created issue #201 in arpl-i18n for them.) First, changing an HDD serial number does crash an existing storage pool. WE NEED TO PUT A CAUTION MESSAGE IN THE LOADER CHANGELOG. A storage pool that contains HDDs on a SAS HBA will show as "missing" after the LKMs update. The workaround is as follows:
     1. Eject all HDDs belonging to a "missing" pool.
     2. Remove the storage pool from Storage Manager.
     3. Attach those HDDs again.
     4. Wait until DSM identifies all the HDDs; it should then report a "recoverable" pool.
     5. Do a "repair" or "reconstruct" on that pool.
     Then there is a more serious problem. It seems the loader doesn't pass the HDD information until DSM has already booted. So, as soon as we can log in to the web portal, the HDDs are not there yet; they pop up one by one, and until they are taken over by DSM, the storage pool shows "missing". In other words, DSM boots "too fast", even before those HDDs are handed over to it. Can you please look into these problems? Thanks a lot.
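     For anyone who wants to confirm over SSH when the pool has actually been reassembled, here is a rough Python sketch of my own. It assumes DSM's usual Linux md-RAID layout and a readable /proc/mdstat; it simply reports each md array and whether any members are missing.

         import re

         # DSM storage pools are built on Linux md RAID; /proc/mdstat shows
         # whether each member of an array is present ('U') or missing ('_').
         with open("/proc/mdstat") as f:
             lines = f.readlines()

         current = None
         for line in lines:
             if re.match(r"^md\d+\s*:", line):
                 current = line.split()[0]          # e.g. "md2"
             elif current:
                 m = re.search(r"\[([U_]+)\]\s*$", line)
                 if m:
                     missing = m.group(1).count("_")
                     state = ("all members present" if missing == 0
                              else f"{missing} member(s) missing")
                     print(f"{current}: {state}")
                     current = None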
  5. I never succeeded in passing through my GTX 1650, neither on ESXi nor on PVE. After you mentioned resetting the BIOS settings, I think it might be "Above 4G decoding" causing the issue. On some motherboards (e.g. Asus), this setting is in the same menu as "SR-IOV support". We all assumed enabling "Above 4G decoding" would boost video card performance, but it also introduces some limitations (or compatibility issues). One known issue is that SLI/CrossFire/multi-GPU won't work if "Above 4G decoding" is enabled. I'm now running a baremetal setup and am very satisfied, so I'm not going to do any more experiments. Anyone using a VM and having a passthrough issue can try to DISABLE "Above 4G decoding" and see if it solves the passthrough issue.
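     As a rough way to see what "Above 4G decoding" actually changes, here is a small Python sketch of my own (Linux-only, not an official tool) that reads a PCI device's memory regions from sysfs and reports whether any of them are mapped above the 4 GiB boundary. The PCI address is just a placeholder; substitute your GPU's address from lspci.

         from pathlib import Path

         # Placeholder PCI address; replace with your GPU's address from lspci.
         DEVICE = "0000:01:00.0"
         FOUR_GIB = 1 << 32

         resource = Path(f"/sys/bus/pci/devices/{DEVICE}/resource")

         # Each line of the sysfs 'resource' file is: <start> <end> <flags> in hex.
         for i, line in enumerate(resource.read_text().splitlines()):
             start, end, flags = (int(x, 16) for x in line.split())
             if end == 0:
                 continue  # unused region
             where = "above" if start >= FOUR_GIB else "below"
             print(f"region {i}: 0x{start:016x}-0x{end:016x} ({where} 4 GiB)")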
  6. You need to set the VM network controller to "e1000e", not "VMXNET 3". Also, you can add a serial port to your VM and direct its output to a file. Check the output file after you see Jun's message; it will probably give you a hint.
  7. I just had the same mystery of missing disks, and a Google search landed me here. Here is my situation. I have 8 x 2TB SAS drives + 2 x SATA drives; 9 of them form an SHR2 pool and 1 is used as a hot spare. The HBA is an LSI 9207-8i (FW 20.00.07.00), running on ESXi with PCIe passthrough. Between the HBA and the HDDs I used an IBM SAS expander so the HBA can accept more than 8 drives.
     After a power loss, 2 of the SAS drives went missing. I thought it might be disk failure, and since I had one hot spare, I added another 2TB SATA drive to fix the pool. After a few days, another 2 SAS drives went missing. Fortunately I have SHR2, so there was no data loss and the pool was still usable. After several reboots, all 4 SAS drives came back with an error message like "cannot access system partition". Then another reboot made them disappear again.
     Next, I removed the IBM SAS expander from the system and added an LSI 9211-8i (FW 20.00.07.00), connecting the 8 SAS drives to the LSI 9207-8i and the 3 SATA drives to the LSI 9211-8i. First I set up PCIe passthrough of both HBAs: same problem, the 4 missing drives come and go randomly. Then I tried RDM without PCIe passthrough (let ESXi manage the HDDs and pass through each individual HDD). With RDM through a virtual SCSI controller (virtual LSI SAS), same issue. But as soon as I changed the setting to run RDM through a virtual SATA controller, the problem was resolved. All 4 missing drives are back and running stable, no matter how many times I reboot.
     If I remember correctly, @IG-88 mentioned in his extension thread that DS918+ cannot handle mpt2sas very well; both the official driver and @IG-88's compiled driver have some weird issues. So I'm guessing the problem is caused by the mpt2sas driver in DSM. For me, the solution is to let DSM think they are all SATA HDDs (disable passthrough of the entire HBA and use RDM instead, with a virtual SATA controller); see the sketch at the end of this post for a way to check which driver DSM sees each disk on.
     An additional FYI (maybe off-topic): I found the lsi_msgpt2 driver in ESXi 6.7u3 has a performance issue. The disk access speed was good for only about 5 seconds; after that I got ~20MB/s read/write. The solution is to disable lsi_msgpt2 and use mpt2sas (you need to download the 20.00.00.00 version from the VMware website).
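     To check which kernel driver DSM is actually using for each disk (i.e., whether a drive sits behind mpt2sas/mpt3sas or behind an AHCI/SATA path), here is a small Python sketch of my own. It only walks sysfs, so it assumes nothing more than shell access to DSM.

         import os
         import re
         from pathlib import Path

         PCI_ADDR = re.compile(r"^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$")

         # For each disk, resolve its sysfs path and walk up to the PCI device
         # that hosts it, then read that device's 'driver' symlink. This shows
         # whether the disk sits behind mpt2sas/mpt3sas or an AHCI/SATA path.
         for blk in sorted(Path("/sys/block").glob("sd*")):
             node = (blk / "device").resolve()
             driver = "unknown"
             while node != Path("/"):
                 if PCI_ADDR.match(node.name):
                     drv = node / "driver"
                     if drv.is_symlink():
                         driver = os.path.basename(os.readlink(drv))
                     break
                 node = node.parent
             print(f"{blk.name}: controller driver = {driver}")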
  8. 2021-08-17T08:28:35.374Z| vmx| | I005: USB: Found device [name:synoboot.vmdk vid:0e0f pid:0005 speed:super family:storage virtPath:usb_xhci:3 deviceType:virtual-storage info:0000001 version:3], connected to usb_xhci port 3. The vid and pid for virtual mass storage don't change in ESXi 7.0. From the previous serial.log, there is no OS that can be selected to boot. I'm not sure where the problem is, but it is worth a try to change the VM compatibility (from 7.0 to 6.7).
  9. It seems grub (the bootloader) didn't find any valid boot entry, so DSM is not loaded. Can you please also share "vmware.log"?
  10. Can you share the "serial.out" file under your VM directory?
  11. I would recommend adding a serial port in the VM options and dumping its output to a log file. After booting your VM, keep checking the serial log file for hints (you can access the log file from the datastore); it may tell you which driver has an issue. P.S. Double-check the "secure boot" option; it needs to remain disabled.
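      As a small convenience, here is a rough Python sketch of my own for skimming that serial log once you've downloaded it from the datastore. The keyword list is only a guess at the usual suspects, not anything official, and the default file name "serial.log" is whatever you configured for the serial port output.

          import sys

          # Hypothetical keyword list: strings that commonly show up around
          # driver or boot failures in the DSM serial console output.
          KEYWORDS = ("fail", "error", "panic", "not found", "timeout", "eth0")

          path = sys.argv[1] if len(sys.argv) > 1 else "serial.log"

          with open(path, errors="replace") as f:
              for lineno, line in enumerate(f, start=1):
                  lowered = line.lower()
                  if any(k in lowered for k in KEYWORDS):
                      print(f"{lineno:6}: {line.rstrip()}")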
  12. Which network adapter are you using? You should use e1000e, not vmxnet3, because DSM doesn't have a driver for vmxnet3.
  13. It is not the boot option. I mean that if you use BIOS mode, there is no support for booting from a USB drive; at least, I couldn't get it to work that way. Maybe you can find some other way to make the VM boot from USB in BIOS mode.
  14. It sounds like a network driver problem. Did you choose "e1000e" for the virtual network adapter? Or maybe try IG-88's extension for additional network drivers?
  15. The ".bin" file was a typo, but it seems I cannot edit the post. Here ".bin" means the synoboot.img you just patched in Step 1a. I don't think an ESXi VM can boot from virtual USB storage in BIOS mode; it has to be in EFI mode. Also, there is no CSM in ESXi's EFI mode. I haven't tried the v1.03b loader, but I don't think it will work.