XPEnology Community

truemanager

Rookie
  • Posts

    7
  • Joined

  • Last visited

  • Days Won

    1

truemanager last won the day on July 20 2023

truemanager had the most liked content!


  1. Hi, Some news after a lot of debugging with the Linux kernel:
     When using RDMs with the PVSCSI controller, the SCSI INQUIRY is reported differently depending on whether the connection is physical or virtual:
     - With pRDMs (passthrough, which gives SMART support for SATA disks), INQUIRYDATA byte[7] returns 49 (0b00110001).
     - With vRDMs (the same disk connected "virtually", without SMART support), INQUIRYDATA byte[7] returns 114 (0b01110010).
     This structure shows the layout of the INQUIRY data:

        typedef struct _INQUIRYDATA {
            UCHAR DeviceType : 5;
            UCHAR DeviceTypeQualifier : 3;
            UCHAR DeviceTypeModifier : 7;
            UCHAR RemovableMedia : 1;
            UCHAR Versions;
            UCHAR ResponseDataFormat : 4;
            UCHAR HiSupport : 1;
            UCHAR NormACA : 1;
            UCHAR ReservedBit : 1;
            UCHAR AERC : 1;
            UCHAR AdditionalLength;
            UCHAR Reserved[2];
            UCHAR SoftReset : 1;
            UCHAR CommandQueue : 1;
            UCHAR Reserved2 : 1;
            UCHAR LinkedCommands : 1;
            UCHAR Synchronous : 1;
            UCHAR Wide16Bit : 1;
            UCHAR Wide32Bit : 1;
            UCHAR RelativeAddressing : 1;
            UCHAR VendorId[8];
            UCHAR ProductId[16];
            UCHAR ProductRevisionLevel[4];
            UCHAR VendorSpecific[20];
            UCHAR Reserved3[2];
            VERSION_DESCRIPTOR VersionDescriptors[8];
            UCHAR Reserved4[30];
        } INQUIRYDATA, *PINQUIRYDATA;

     You can see that with pRDM the CommandQueue bit is FALSE and with vRDM it is TRUE. My assumption is that the ESXi server turns the CQ flag off because it doesn't check the type of the connected disk (to be safe). But SATA disks with NCQ support can handle multiple commands, and in fact it works: on a Linux kernel > 3.x the queue_depth accepts values between 1 and 31, and performance degrades when it is set to 1.
     However, inside XPE it seems that DSM forces the queue_depth to 1, because it checks the "tagged_supported" field of the "scsi_device" structure. That field is set to 1 only if the INQUIRY data has the CommandQueue flag set. Check the kernel source file "scsi_scan.c" and search for the function "scsi_add_lun()"; you will see something like:

        if ((sdev->scsi_level >= SCSI_2) && (inq_result[7] & 2) &&
            !(*bflags & BLIST_NOTQ)) {
                sdev->tagged_supported = 1;
                sdev->simple_tags = 1;
        }

     Therefore, my idea is one of these:
     - In the redpill-lkm module, create a new shim function that filters this INQUIRY response and turns the CQ flag on. Perhaps it could be done automatically (checking the disk, and only if the controller is PVSCSI).
     - Alternatively, modify the PVSCSI driver to do similar filtering. Perhaps this makes more sense, because only users with this controller need it. Then, with a boot parameter (like pvscsi=cq:2,3,5), you could manually enable the filtering for the disks that you are sure have NCQ support.
     What do you think?
     And why do I want this? I'm trying to enable full SMART support over PVSCSI. I've read a lot and I think I can add code to the SCSI shim that uses the SAT protocol to pass the SMART commands through, instead of the clumsy fake emulation used now. But before implementing that, the queue_depth problem has to be solved, because it limits performance a lot on the pRDM connection. And if we can fix this, then implementing full SMART passthrough afterwards will make sense, because performance will be the same with a virtual or physical connection.
     Would you like to help me do it?
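     For reference, a minimal sketch of how to confirm the CmdQue bit difference yourself, assuming sg3_utils and hexdump are available (they may not be part of a stock DSM install, and /dev/sdb and /dev/sdc are hypothetical pRDM and vRDM disks):

        # Dump the raw standard INQUIRY response and look at byte 7:
        sg_inq -r /dev/sdb | hexdump -C | head -n 4   # pRDM: byte 7 = 0x31, CmdQue bit (0x02) clear
        sg_inq -r /dev/sdc | hexdump -C | head -n 4   # vRDM: byte 7 = 0x72, CmdQue bit (0x02) set
        # The decoded view prints the flag directly:
        sg_inq /dev/sdb | grep -i cmdque              # expect CmdQue=0 on the pRDM disk
        # scsi_add_lun() only sets tagged_supported (and so allows queue_depth > 1)
        # when this bit is set, which is why the filtering idea above targets it.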
  2. Thank you for sharing my patch with the GitHub user. However, to fully use the patch it is necessary to promote it to the rest of the platforms, generate the binaries and include them inside the ARC loader as custom modules. Furthermore, a simple insmod executed at boot could be interesting.
     Regarding the use of this: I'll prepare a guide on shrinking disks. In the meantime, consider this: when you have an LVM group (vg1), you can attach ANY other device to it (after a simple "pvcreate /dev/XXX"), and then you can "move" volumes (with "lvm pvmove /dev/SRC /dev/DST"), all in real time. So with these "missing" modules you can keep using the volumes while moving them to a different device. The trick is then:
     1) shrink the volumes;
     2) shrink the LVM containers;
     3) add new scratch space to the LVM group;
     4) move the volumes to the new space (dm-mirror required);
     5) remove the old disks from the LVM;
     6) attach the "new" (smaller) disks;
     7) set up the new disks without creating a new group (or remove the one created);
     8) manually add the new disks to the LVM group;
     9) move the volumes again, this time to the new disks (dm-mirror required again);
     10) remove the scratch space from the LVM group;
     11) reboot.
     The "new" LVM group will take up the space of the smaller disks. I hope soon all RP loaders will include these necessary modules. Regards.
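     A rough command-line sketch of the pvmove part of this flow (steps 3-5 and 8-10), with hypothetical names: vg1 as the group, /dev/sda5 as an old data partition, /dev/sdx as the scratch disk and /dev/sdy as the new smaller disk. This is only an illustration of the technique, not a tested DSM recipe, and it requires the dm-mirror/dm-log/dm-region-hash modules to be loaded:

        pvcreate /dev/sdx                 # prepare the scratch disk as a physical volume
        vgextend vg1 /dev/sdx             # add it to the existing group
        lvm pvmove /dev/sda5 /dev/sdx     # move all extents off the old disk, online
        vgreduce vg1 /dev/sda5            # drop the old disk from the group
        # ...swap in the smaller disk, then repeat in the other direction:
        pvcreate /dev/sdy
        vgextend vg1 /dev/sdy
        lvm pvmove /dev/sdx /dev/sdy      # move the extents onto the new smaller disk
        vgreduce vg1 /dev/sdx             # finally remove the scratch space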
  3. Hi, I need your help to fix this problem. I'm using the physical RDM functionality of ESXi to pass SATA disks directly to DSM. I discovered that with this passthrough I can read the real SMART values using the "smartctl -d sat" mode. You can check the difference by querying the same disk with "smartctl -d sat" and "smartctl -d ata": if the disk is a SATA disk, connected through the PVSCSI controller and using pRDM, you will see the difference between the "fake" SMART values (-d ata) and the "real" SMART values (-d sat).
     The problem is the QUEUE_DEPTH of the disks connected in this mode. You can see the values with "lsscsi -l" (if you have the tool installed) or with "cat /sys/block/sd*/device/queue_depth". The queue depths you can obtain are:
     - Virtual VMDK disk over PVSCSI: 31 (OK)
     - pRDM SATA disk over SATA: 31 (OK, but with troubles)
     - pRDM SATA disk over PVSCSI: 1 (INCORRECT)
     If you compare the performance, you will see that with pRDM disks on the PVSCSI controller inside DSM the write performance is low (~50%). With the SATA (AHCI) controller the performance for one disk is OK, but the latency increases a lot (~500%). Furthermore, using multiple pRDM disks with the SATA controller can crash the hypervisor (purple screen of death). Therefore, the best option is to use pRDM with PVSCSI in any case (or vRDM if you don't need SMART).
     So I'm now trying to "fix" the queue_depth value. I'm reading a lot of kernel source, searching for the root cause that sets the value to 1. So far I suspect that the internal SCSI device struct has "tagged_supported" set to 0, because that forces the qdepth to 1 in the "vmw_pvscsi.c" driver, in the function pvscsi_change_queue_depth(). I mention this because executing "echo 31 > /sys/block/sd*/device/queue_depth" doesn't change the value. So I'm asking for help to find a solution. Any idea how to debug the DSM kernel? Any help with this problem?
     I should add that booting a recent Linux live CD with the same disk configuration on the VM, the queue depth is correct (31). So this is not a fixed/required value and we can "fix" the problem.
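     For reference, a quick triage sketch of the checks described above; /dev/sdb is a hypothetical pRDM SATA disk behind PVSCSI, and smartctl/lsscsi may need to be installed:

        smartctl -d ata -A /dev/sdb       # emulated ("fake") SMART attributes
        smartctl -d sat -A /dev/sdb       # real SMART attributes via SAT passthrough
        cat /sys/block/sdb/device/queue_depth         # shows 1 on the affected setup
        echo 31 > /sys/block/sdb/device/queue_depth   # has no effect while tagged_supported is 0
        cat /sys/block/sdb/device/queue_depth         # still 1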
  4. Hi, You can find modules for 7.2 here. And if you need them for "other" platforms, ask the developers to include my patch in the "stock" loaders. Regards.
  5. Hi, you can find the patch to add "stock" support for this in ARC (and other loaders) in this post:
  6. Hi, This is based on this good old post from Totalnas: https://xpenology.com/forum/topic/418-kernel-hijack/?do=findComment&comment=3715
     The problem is that the Synology kernel doesn't include by default the kernel modules needed by the "pvmove" LVM command. And YES, this is required to shrink volumes and change disks without touching/removing/deleting the volumes (I'll explain it in the future). So I've created a patch for AuxXxilium/arc-modules to generate the modules (dm-mirror.ko, dm-log.ko and dm-region-hash.ko). The patch is at the end. It is fairly simple: for the kernel sources it adds the modules and compiles them (I've only added the changes for the "broadwellnk-4.4.302" platform, i.e. DSM 7.2 with DS3622xs+). Therefore, I suggest adding it to the main repository and promoting it to all platforms. Then you can use it with the good ARC Loader. Do you agree? So please, PocoPico and AuxXxilium, can you merge this patch into https://github.com/AuxXxilium/arc-modules ? Thank you.
     dm-mirror-broadwellnk-4.4.302.zip dm-mirror.patch
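     Once the three modules are built, loading them by hand might look like this (a sketch; the path under /lib/modules is an assumption, adjust it to wherever your loader installs custom modules, and the order matters because dm-mirror depends on dm-log and dm-region-hash):

        insmod /lib/modules/dm-log.ko
        insmod /lib/modules/dm-region-hash.ko
        insmod /lib/modules/dm-mirror.ko
        # Verify that the device-mapper "mirror" target is now registered:
        dmsetup targets | grep -i mirror
        lsmod | grep dm_mirror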
  7. Hi, In order to manage the volumes/partitions from the CLI (remember that you can resize/shrink BTRFS volumes online), it's necessary to have the **FULL** LVM tools; call the "lvm" binary to get access to all of them. However, missing kernel modules make it impossible to MOVE VOLUMES BETWEEN PHYSICAL DEVICES. The missing kernel modules are "dm-mirror" (and the related "dm-log" & "dm-region-hash"). So please, could you add these kernel modules to the driver packages? Thank you. Read more here: http://www.gebi1.com/thread-80754-1-1.html
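     A quick way to check whether a box already has what pvmove needs (a sketch; availability of these commands on DSM is an assumption):

        lvm version                       # the full LVM tool set is reachable through the "lvm" binary
        dmsetup targets                   # "mirror" must appear here for pvmove to work
        lsmod | grep -E 'dm_(mirror|log|region_hash)' # the three modules this request is about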