bucu Posted January 3, 2021 #1

After wracking my brain over the New Year's weekend and scouring both the Xpenology and Proxmox forums, I finally got IOMMU passthrough working for my LSI card. (Pay attention to the little details, guys!!! The code goes on one line, one line!!!! It isn't delineated line by line. 😩 /rant)

Before the passthrough fix, Proxmox showed all 7 drives installed on the LSI PCI card. After passthrough it obviously doesn't, since the Xpenology VM is accessing them directly. However, upon logging into DSM I'm seeing some weird behavior, and I don't even know where to begin, so maybe someone has seen this before and can point me in the right direction. [Just as a side note: yes, only 7 drives, even though the hard drive caddy and PCI card can support 8.]

As you can see in the picture, drives are listed from 7-16. I am running two SSDs as a ZFS mirror for the Proxmox boot and the VM image storage. I have 7 of 8 drives installed on the LSI 9211-8i PCI card, and I see 4 of those drives as Drives 13-16. Drives 7 and 8 are the VM SATA drives holding the boot and loader information. Three drives are missing on the other LSI SAS plug [I assume the three missing ones are all on the second SAS/SATA plug, since that makes sense and it is port #2 on the card].

My guess is there is a maximum capacity of 16 drives in the DSM software. The mobo has a 6-port SATA chipset (+2 NVMe PCIe/SATA, unused), and the two boot SATA devices [Drives 7 and 8] are technically virtual. 6 from the physical chipset SATA ports, +2 virtual SATA for boot, +4 [of 8] from the LSI = the 16 spots listed. Is my train of thought on the right track? If so, my next thought is: how do we block the empty [unused] chipset SATA ports from taking up wasted space in the Xpe VM?

Like I said, I'm stuck. I need a helpful push in the right direction.
Basic specs for info:

Proxmox VM running:
- 1.04 loader with no extra.lzma (I haven't figured out how to configure the lzma under Proxmox without using a USB yet)
- Boot from a virtual SATA drive [QEMU drive] partitioned on the Proxmox SSD as part of the VM
- DSM 6.2.3-25426

Hardware: i7-9700, ASRock B365 Pro4, 32 GB RAM, Intel X710 PCIe NIC, LSI 9211-8i PCIe HBA, 2x SATA SSD ZFS mirror

Drives on LSI card: 7x 14 TB Seagate EXOS

Space below left for future editing of OP for any requested information.
IG-88 Posted January 3, 2021 #2

2 hours ago, bucu said:
Missing 3 on the other LSI SAS plug [assuming the three missing are all on the second SAS/SATA plug as it makes sense and it is port #2 on the card].

Not really; you don't see drives above 16. The LSI SAS driver does not use fixed positions the way you might expect and know from a SATA/AHCI controller. If you put one drive on the 1st SAS port and 4 drives on the 2nd, you would see 5 drives in a row, no gaps. The drives can change their Linux designation (sdg, sdh, sdi, ...): if the drive that had sdh goes missing, on the next boot another drive will get sdh, and DSM will show that in its GUI. So disk positions in a shelf are not "stable" (at least across reboots); you need to keep track of disks by serial number and check that before pulling a drive.

2 hours ago, bucu said:
Proxmox VM running: 1.04 loader with no Extra.lzma (i haven't figured out how to configure the lzma using proxmox without using a USB yet)

No, you are using the one that came with the loader. The extra.lzma is more than just drivers; it also contains a patch that takes care of things that need to be changed from DSM's default configuration. So you don't use an extended extra.lzma. A way to handle the boot image (*.img) is described in the "normal" tutorial (Win32DiskImager 1.0 for Windows):
https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
but there is also a way laid out for Linux:
https://xpenology.com/forum/topic/25833-tutorial-use-linux-to-create-bootable-xpenology-usb/

2 hours ago, bucu said:
My guess is there is a max capacity of 16 drives in the DSM software.
No, the original "DSM limit" of the 918+ is 4 drives; the 16 comes through the patch in the extra/extra2.lzma and is checked on every boot (in case a DSM update replaces the patched file). You can manually change it to 20 without much problem (if you read how to do it properly; there is also a nice YouTube video about that), but you would lose that change when installing a bigger DSM update (like 6.2.2 -> 6.2.3, usually a *.pat file that contains a new disk image of the system partition; they are ~200-300 MB as a *.pat file and have a hda1.tgz file inside). With the patch gone, 3-4 disks disappear, resulting in a broken RAID. After restoring the change manually and rebooting, your drives would be back and the RAID should work again. But you usually don't break a RAID on purpose (depending on the RAID level and which drives go missing, you might lose only the redundancy disks and your RAID would still start; in that case you would need to rebuild the redundancy and be without it for 12-24 h).

2 hours ago, bucu said:
6 from the physical sata ports from chipset, +2 virtual sata for boot, +4 [of 8] from the LSI = the 16 spots listed.

Yes, close to it. But DSM is in a VM, so why are there any other "unwanted" devices? It's about your VM configuration; there should only be the things you define for your VM. You can check /var/log/dmesg for controllers and their number of ports, and try to map that back to your VM configuration to find out what's wrong. If you define a 6-port controller in the VM, then DSM will block 6 slots, aka numbers of possible drives. In ESXi the boot drive gets its own controller, so it is separated, leaving the 2nd controller for system/data disks. So check your definition file for the VM and compare it with the tutorial section. I never used Proxmox, but your problem is kind of generic and based on how DSM works and counts drives.

2 hours ago, bucu said:
Space below left for future editing of OP for any requested information.
That's not going to work here; you only have a short time to edit a post, and after that you need to add a new one.
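IG-88's suggestion to map the dmesg output back to the VM configuration could be done with a grep along these lines. This is only a sketch; the driver names (ahci for QEMU's virtual SATA controller, mpt2sas for the LSI 9211-8i) are assumptions to adjust for your own setup:

```shell
# Inside the DSM VM: list every storage controller and SCSI host the kernel
# registered at boot, then compare the controllers and their port counts
# against what the Proxmox VM definition actually declares
grep -iE 'ahci|mpt2sas|scsi host|ata[0-9]+' /var/log/dmesg
```

Each virtual AHCI controller that shows up here with N ports costs N of DSM's drive slots, whether or not a disk is attached.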
bucu Posted January 4, 2021 Author #3

Thanks for the reply, IG-88. I will re-read it in depth later today if I have time; I have some work-related projects I need to finish that take priority first.

Quote:
if the drive that had sdh goes missing, on next boot another drive will get sdh ... you need to keep track by serial number of disks and check that before pulling a drive

That's good to know. I assumed it would order them in a fixed way.

Quote:
so you dont use a extended extra.lzma ... a way how to handle the boot image (*.img) is described in the "normal" tutorial (Win32DiskImager 1.0 for windows)

I figured there must be some parallels, but I missed the section about loading that to a virtual partition without the usual OSFMount step that you would use for the USB. I'll read it again later.

Quote:
its about your vm configuration, there should only be things you define for your vm ...

Yeah, that makes sense. I just don't remember specifying those in the configuration. I will check again. Then I will check it a second time. HAHA
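Tracking disks by serial number, as IG-88 recommends, can be scripted. A minimal sketch, assuming smartctl (from smartmontools) is available and the disks appear as /dev/sd*; adjust the device glob for your system:

```shell
# Print each disk's device name next to its serial number, so you can match a
# failing drive in the DSM GUI to the physical disk before pulling it
for d in /dev/sd[a-z]; do
    [ -b "$d" ] || continue                    # skip if the glob matched nothing
    printf '%s: ' "$d"
    smartctl -i "$d" | grep -i 'serial number'
done
```

Recording this mapping while the array is healthy is the safe moment to do it; after a drive drops out, the letters may already have shifted.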
bucu Posted January 9, 2021 Author #4

I think I found the error while looking through the dmesg log. These appear to be the default arguments loaded from the grub file embedded in the synoboot.img loader:

[ 0.000000] Command line: syno_hdd_powerup_seq=1 SataPortMap=333 HddHotplug=0

It makes sense that it is passing through all these SATA devices I'm not using, because all my drives are on the LSI controller. How do I modify that startup config to use SataPortMap=111 or SataPortMap=222, so that I can free up a slot for that one remaining drive without also modifying the max-drive count, which seems unnecessary IMO?

I have only done bare-metal Xpenology before, and that was about 6 years ago, back on DSM 5; my real Syno box has served me just fine until now. This time I want to run it as a VM to learn more virtualization stuff, but man, being a newb at something feels like being blind and fumbling around in the dark. I can be happy with 6 drives if that is where I am. However, if I can get the 7th, that'd be swell, because then I can set up a hot spare to kick in if a drive failure occurs.
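For what it's worth, one way to change those arguments is to edit grub.cfg inside the loader image from a Linux host before (re)attaching it to the VM. This is a sketch under assumptions: that grub.cfg lives at grub/grub.cfg on the first partition of Jun's 1.04 synoboot.img, and that the current value is SataPortMap=333; verify both against your own image first:

```shell
# Attach the loader image with partition scanning (-P); prints e.g. /dev/loop0
losetup -fP --show synoboot.img

# Mount the first (boot) partition, which holds grub/grub.cfg in Jun's loader
mkdir -p /mnt/synoboot
mount /dev/loop0p1 /mnt/synoboot

# SataPortMap takes one digit per SATA controller (the port count DSM reserves
# for it); 111 tells DSM each of the three controllers has a single port, so
# the unused virtual ports stop eating drive slots
sed -i 's/SataPortMap=333/SataPortMap=111/' /mnt/synoboot/grub/grub.cfg

# Detach cleanly so the VM sees the updated image
umount /mnt/synoboot
losetup -d /dev/loop0
```

DiskIdxMap can reportedly be adjusted the same way to control where each controller's drives start numbering, but that is beyond what this thread covers.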
r27 Posted January 13, 2021 #5

Can someone share the steps to pass an LSI controller through to Xpenology running on Proxmox? I did everything possible and impossible, using multiple different loaders, etc. I can pass the controller through to an Ubuntu VM and see the drives, but the Xpenology VM doesn't see them.
bucu Posted January 13, 2021 Author #6 (edited)

8 hours ago, r27 said:
Can some share steps to pass LSI controller to xpenology running on Proxmox? ...

Sure. "lspci" and "lspci -k" are the two commands you will need to check your LSI card. You want to see whether it is in a separate IOMMU group from other devices and your chipset; for me it already was. There is a guide somewhere on these forums on what to do if it isn't in a separate group; you can search for that. You also need to make sure your hardware supports virtualization.

Make sure you have IOMMU enabled in your grub file on Proxmox. You can refer to the guide here: Xpenology on Proxmox Install Guide. Another post that might be illuminating: PCI Passthrough IOMMU (Proxmox forum).

After those steps are taken, you need to update the boot files with the changes made to the grub file; those steps are also listed in the guide. Then modify your VM to pass through the PCI device with the hardware ID you saw in the "lspci" output. I personally restarted the Proxmox server before starting the VM. I am not sure that step is necessary, but it is what I did to make sure the changes were initialized.

I'm a beginner at Proxmox, so if someone more knowledgeable than me can correct or add to this, feel free! You can also check dmesg in the Synology VM to see if there are any errors after completing the guide's steps (possible errors: driver conflict, driver not loading, card not in IT mode (HBA)).

Hope that helps, r27.

Edited January 13, 2021 by bucu (added additional information)
r27 Posted January 13, 2021 #7

19 minutes ago, bucu said:
Sure. "lspci" and "lspci -k" are two commands you will need to check your LSI card. ...

Well, I can pass the LSI through to an Ubuntu VM, for example, so from the Proxmox setup standpoint everything works. Which boot loader did you use?
bucu Posted January 13, 2021 Author #8

I used 1.04 with the DS918 image and the relevant PAT file. I want hardware transcoding in the future, and the DS3615/DS3617 builds currently don't support that; otherwise I would use one of them. I have heard that the DS3615/DS3617 images have fewer issues with large arrays and 10 GbE, though, depending on your needs. There is a wonderful chart on the forums that shows basic information about the different loaders/images.

Full disclosure: I'm still troubleshooting my setup and I'm not up and running. I couldn't get my 10 GbE to work, so I deleted the old VM (having made tons of changes trying to get the drives working before finding what and where I needed to change things). But now, after trying to build a new VM from the ground up with the custom extra files, I can't boot the VM and get a network IP to log in and set it up. Still working and learning.