XPEnology Community

Diverge
Member, 355 posts

Posts posted by Diverge

Thx, now it works when the disk is attached at startup, but not with hotplug. Do I have to change something in the BIOS?

     

There's nothing to set in DSM - I have a similar mod making one drive eSATA, and hot-swapping works fine for me. Sounds like it's related to your system (motherboard or a BIOS setting, etc.).

  2. Hi everyone,

     

I have an Intel SS4200-E with 4 internal SATA ports, 2 eSATA ports, and 4 USB ports. The 4 internal SATA ports show up as disks 1-4, the eSATA ports show up as disks 5-6, and the USB ports show up as external devices.

     

Now I want the eSATA ports to be used as external devices, not as internal ones.

     

I tried the following config:

     

    internalportcfg="0xf"

    esataportcfg="0x30"

    usbportcfg="0xf00000" (standard value)

    maxdisks="12" (standard value)

     

But when I try this config, the drives attached to the eSATA ports don't show up in DSM - not in the device manager and not under external devices. Can you give me a hint about what I am doing wrong?

     

    Greetings holybabel

     

You can't just truncate the values like that. You need to keep all the zeros and the exact number of hex digits. You only want to change bits.

     

    Stock settings:

    esataportcfg="0xff000"
    usbportcfg="0xf00000"
    internalportcfg="0xfff"

     

You'd want:

    esataportcfg="0xff0f0"
    usbportcfg="0xf00000"
    internalportcfg="0xf0f"

     

Think of it this way: all hard drive types together = 0xffffff.

Each line - usb, esata, internal - takes from that pool.

Each hex digit equals 4 bits = 4 drives.

0xffffff <--- represents 24 disks total; the rightmost digit = drives 1-4, the next one drives 5-8, etc.
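
If you want to sanity-check the bit math, here's a quick shell sketch of the masks above (just my illustration, using holybabel's 4-internal + 2-eSATA layout and moving the whole second hex digit):

# one bit per disk slot; one hex digit covers 4 slots
printf 'internalportcfg=0x%x\n' $(( 0xfff & ~0xf0 ))   # drop slots 5-8 -> 0xf0f
printf 'esataportcfg=0x%x\n'    $(( 0xff000 | 0xf0 ))  # add slots 5-8  -> 0xff0f0
printf 'usbportcfg=0x%x\n'      $(( 0xf00000 ))        # unchanged      -> 0xf00000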

Thanks Diverge, but I think you may have missed the point. Adding the VID and PID to the syslinux.cfg file is what this thread is about, but my question is about the step that then lets you mount what the system knows as /dev/synoboot. Getting physical access to the USB stick isn't easy for me, so it's handy to be able to mount it and make alterations to the files remotely, but at the moment I can't mount it.

     

    Yeah, I guess I missed that part :oops:

Or you can use the built-in option that the devs put in. The following is an example of the syslinux.cfg on your USB drive; just edit the VID and PID #'s to match those of your USB drive.

     

    UI menu.c32
    PROMPT 0
    TIMEOUT 50
    DEFAULT xpenology
    MENU TITLE XPEnoboot 5.1-5055.1-19c83d5
    
    LABEL xpenology
          MENU LABEL XPEnology DSM 5.1-5055
          KERNEL /zImage
          APPEND root=/dev/md0 ihd_num=0 netif_num=4 syno_hw_version=DS3615xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168 loglevel=0 vga=0x305 rmmod=ata_piix
    
    LABEL debug
          MENU LABEL XPEnology DSM 5.1-5055 Debug
          KERNEL /zImage
          APPEND root=/dev/md0 ihd_num=0 netif_num=4 syno_hw_version=DS3615xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168 loglevel=0 vga=0x305 debug=1 console=ttyS1,115200 rmmod=ata_piix
    
    LABEL install
          MENU LABEL XPEnology DSM 5.1-5055 Install/Upgrade
          KERNEL /zImage
          APPEND root=/dev/md0 ihd_num=0 netif_num=4 syno_hw_version=DS3615xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168 loglevel=0 vga=0x305 upgrade=5.1-5055 rmmod=ata_piix
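
If you don't know your stick's VID and PID, plug it into any Linux box and run lsusb - the ID column is VID:PID (the bus/device numbers below are just an example; the ID shown happens to match the config above, yours will differ):

lsusb
# Bus 002 Device 003: ID 0ea0:2168 ...
#                        ^^^^ ^^^^
#                        VID  PID   -> vid=0x0EA0 pid=0x2168 in syslinux.cfg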

I personally wouldn't buy anything but an LSI 9211-8i or a Dell PERC H310, off eBay for $50-$100. Just flash it with IT-mode firmware and you're good to go. I'd avoid the ones from China though... they are probably cheap clones.

  6. I found some info on the LSI drive mapping here:

    http://unix.stackexchange.com/questions ... id-systems

    https://wiki.debian.org/Persistent_disk_names

     

Apparently since DSM 5.1, Synology uses udevadm, but it doesn't seem to get the port mapping right. I have the first 2 drives connected to ports 0 and 1, and the 3rd connected to port 7. You can see there's nothing in the output to show that:

     

    DSM-Test> udevadm info --query=path --name=/dev/sda
    /devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
    DSM-Test> udevadm info --query=path --name=/dev/sdb
    /devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb
    DSM-Test> udevadm info --query=path --name=/dev/sdc
    /devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/port-0:2/end_device-0:2/target0:0:2/0:0:2:0/block/sdc
    DSM-Test>
    

But if you look at dmesg, you'll see that the phy number correlates to the real port numbers:

    DSM-Test> dmesg |grep scsi
    [    0.725581] scsi0 : Fusion MPT SAS Host
    [    2.956387] scsi 0:0:0:0: Direct-Access     WDC      WD740GD-00FLA1           8D27 PQ: 0 ANSI: 6
    [    2.956403] scsi 0:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x0000000000000000)
    [    2.956408] scsi 0:0:0:0: SATA: enclosure_logical_id(0x5000000080000000), slot(3)
    [    2.956536] scsi 0:0:0:0: atapi(n), ncq(n), asyn_notify(n), smart(y), fua(n), sw_preserve(n)
    [    2.956541] scsi 0:0:0:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
    [    3.202241] scsi 0:0:1:0: Direct-Access     WDC      WD5000AAKS-00UU3A0       3B01 PQ: 0 ANSI: 6
    [    3.202256] scsi 0:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221107000000), phy(7), device_name(0x0000000000000000)
    [    3.202261] scsi 0:0:1:0: SATA: enclosure_logical_id(0x5000000080000000), slot(4)
    [    3.202420] scsi 0:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
    [    3.202426] scsi 0:0:1:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
    [    3.456442] scsi 0:0:2:0: Direct-Access     WDC      WD740GD-00FLA1           8D27 PQ: 0 ANSI: 6
    [    3.456459] scsi 0:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221101000000), phy(1), device_name(0x0000000000000000)
    [    3.456464] scsi 0:0:2:0: SATA: enclosure_logical_id(0x5000000080000000), slot(2)
    [    3.456588] scsi 0:0:2:0: atapi(n), ncq(n), asyn_notify(n), smart(y), fua(n), sw_preserve(n)
    [    3.456594] scsi 0:0:2:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
    [    8.585973] sd 0:0:0:0: Attached scsi generic sg0 type 0
    [    8.586282] sd 0:0:1:0: Attached scsi generic sg1 type 0
    [    8.586623] sd 0:0:2:0: Attached scsi generic sg2 type 0
    DSM-Test>
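
A quick way to pull just the scsi-to-phy mapping out of that wall of text (a throwaway one-liner, assuming the mpt2sas log format shown above) - it should print something like:

dmesg | grep 'SATA: handle' | sed 's/.*scsi \([0-9:]*\): .*phy(\([0-9]*\)).*/scsi \1 -> phy \2/'
# scsi 0:0:0:0 -> phy 0
# scsi 0:0:1:0 -> phy 7
# scsi 0:0:2:0 -> phy 1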
    

     

edit: a little info I just found that correlates with my observation about the phy #: https://utcc.utoronto.ca/~cks/space/blo ... uxSASNames

     

edit2: the last post of this thread seems pretty interesting: https://forums.freenas.org/index.php?th ... der.15286/ I'll have to play with LSIUtil when I get a chance.

     

edit3: interesting features from the lsiutil manual:

     

    Figure 2.13 Changing SAS I/O Unit Settings
    
    SATA Maximum Queue Depth: [0 to 127, default is 32]
    Device Missing Report Delay: [0 to 2047, default is 0]
    Device Missing I/O Delay: [0 to 255, default is 0]
PhyNum  Link     MinRate  MaxRate  Initiator  Target    Port
   0    Enabled    1.5      3.0    Enabled    Disabled  Auto
   1    Enabled    1.5      3.0    Enabled    Disabled  Auto
   2    Enabled    1.5      3.0    Enabled    Disabled  Auto
   3    Enabled    1.5      3.0    Enabled    Disabled  Auto
   4    Enabled    1.5      3.0    Enabled    Disabled  Auto
   5    Enabled    1.5      3.0    Enabled    Disabled  Auto
   6    Enabled    1.5      3.0    Enabled    Disabled  Auto
   7    Enabled    1.5      3.0    Enabled    Disabled  Auto
    Select a Phy: [0-7, 8=AllPhys, RETURN to quit] 0
    Link: [0=Disabled, 1=Enabled, default is 1]
    MinRate: [0=1.5 Gbps, 1=3.0 Gbps, default is 0]
    MaxRate: [0=1.5 Gbps, 1=3.0 Gbps, default is 1]
    Initiator: [0=Disabled, 1=Enabled, default is 1]
    Target: [0=Disabled, 1=Enabled, default is 0]
    Port: [0 to 7 for manual config, 8 for auto config, default is 8]
    Persistence: [0=Disabled, 1=Enabled, default is 1]
    Physical mapping: [0=None, 1=DirectAttach, 2=EnclosureSlot, default is 0]

     

edit4: made the changes from Auto to port #'s matching the phy #'s. Made no difference...

  7. See my comment here: viewtopic.php?f=2&t=5026&start=1020#p38031

     

To sum it up, LSI cards have never mapped the physical port #'s in order. Drive labels have always been assigned by whichever drive spun up and was seen first on boot. Onboard SATA has always retained some kind of consistent mapping, though. Not sure why this is, but if this LSI issue can be fixed I'd donate some $ to whoever fixes it, since it keeps me from turning a specific drive bay into an eSATA slot for backups... right now I have to pull the drive on boot and re-insert it after DSM boots, so it doesn't mess up the array.

  8. Hello :smile:

     

More info would make it easier for someone to help you:

1) What version of the bootloader are you using? What version of DSM?

2) What is your hardware? Software, ESXi version, etc.?

3) Where are you seeing this message?

4) What is the EXACT message? Take a screen capture if possible.

md0 is degraded because you're not using all 12 slots. Examine md0, md1, md2, md3, etc., and you'll see they all say degraded... I think you can ignore that.

     

But I assume I can't ignore the (E) flag/state of md0:

DiskStation> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active linear sdb3[0] sdd3[2] sdc3[1]
     3207050304 blocks super 1.2 64k rounding [3/3] [UUU]

md2 : active raid1 sda3[0]
     3666240 blocks super 1.2 [1/1] [U]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
     2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sda1[0](E)
     2490176 blocks [12/1] [E___________]

unused devices: <none>

     

I did not find any other way to fix it except using mdadm --stop and mdadm --assemble --force -v, as described in the web pages linked above.

     

Is it possible to overwrite the system partition without losing settings and data? I'm thinking of the procedure used by a DSM upgrade or migration.

     

I'm pretty sure all your DSM settings are on that partition. But you could always dump a copy with 'dd' so you can revert back, etc. You could then try editing the version number and upgrading with the same version, to see if it fixes it and gives you the option to migrate your settings and retain your data. But if I were you, I'd remove my data array before trying stuff, so it doesn't get messed up somehow.
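
Something like this is what I mean by dumping a copy with dd (the paths are just an example - adjust to wherever you have space):

# back up the DSM system partition of disk 1
dd if=/dev/sda1 of=/volume1/backup/sda1-dsm.img bs=1M
# to roll back later:
# dd if=/volume1/backup/sda1-dsm.img of=/dev/sda1 bs=1M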

  10. Didn't you take a snapshot???

     

     

FYI, a snapshot will do nothing for you. It would only be a snapshot of your VM config and the bootloader image... which doesn't even get written to. The actual DSM install and DSM config files live on the 1st partition of each storage disk... so unless you're backing up your storage disks, snapshots are useless for XPEnology.

  11. Hello.

I'm trying to solve the spam problems this week.

     

Make it a paid site for new members - $5 in order to see and post on the forums. That would help pay for web hosting... and if spammers want to pay $5 to spam your site, then they get banned. Win-win.

Not a very good idea, IMO. Would you pay?

     

    Sure. It isn't much.

Anyone?

     

My suggestion is to do lots of searching and reading about data recovery. You could try http://forum.cgsecurity.org/phpBB3/ - those guys are experts in data recovery and may be able to help you. Be prepared to spend lots of time. I lost an array once, and it took me about 6 months before I was able to recover my data... someone from the testdisk forums helped me a lot, and the rest took persistence and luck.

     

I documented my experience here: http://forum.cgsecurity.org/phpBB3/foun ... t2600.html Good luck :!:

  13. Hello.

I'm trying to solve the spam problems this week.

     

Make it a paid site for new members - $5 in order to see and post on the forums. That would help pay for web hosting... and if spammers want to pay $5 to spam your site, then they get banned. Win-win.

md0 is degraded because you're not using all 12 slots. Examine md0, md1, md2, md3, etc., and you'll see they all say degraded... I think you can ignore that. Here's what mine looks like:

     

    DiskStation> mdadm --detail /dev/md0
    /dev/md0:
           Version : 0.90
     Creation Time : Fri Dec 31 19:00:03 1999
        Raid Level : raid1
        Array Size : 2490176 (2.37 GiB 2.55 GB)
     Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
      Raid Devices : 12
     Total Devices : 4
    Preferred Minor : 0
       Persistence : Superblock is persistent
    
       Update Time : Wed May 27 17:54:13 2015
             State : clean, degraded
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
    
              UUID : cb413e5c:819a4ff3:3017a5a8:c86610be (local to host DiskStation)
            Events : 0.762227
    
       Number   Major   Minor   RaidDevice State
          0       8        1        0      active sync   /dev/hda1
          1       8       17        1      active sync   /dev/sdb1
          2       8       33        2      active sync   /dev/sdc1
          3       8       49        3      active sync   /dev/hdd1
          4       0        0        4      removed
          5       0        0        5      removed
          6       0        0        6      removed
          7       0        0        7      removed
          8       0        0        8      removed
          9       0        0        9      removed
         10       0        0       10      removed
         11       0        0       11      removed
    DiskStation>
    

     

I'm not sure why some of my disks are labeled hdx vs. sdx... but this isn't the 1st time I've seen it. Kinda odd...

     

I'm not sure why you can't mount it in Linux. I've never tried doing that with anything but the storage array. Maybe just try mounting the first partition of the disk, /dev/sdb1 :?: If you can, set aside your disks, make a set of new test disks configured the way your current system is, and see how the md#'s end up configured.
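
One thing that might explain the mount failure: that partition is an md raid1 member, not a bare filesystem, so you may need to assemble it degraded first. An untested guess on my part - /dev/md9 and /mnt/dsm are arbitrary names I made up:

mdadm --assemble --run /dev/md9 /dev/sdb1   # start the mirror degraded with one member
mkdir -p /mnt/dsm
mount -o ro /dev/md9 /mnt/dsm               # read-only, just to poke around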

Any news on 5.2 progress? Which boards, threads, etc. are people discussing it on? Please don't take this as me being impatient - I understand it takes some time. I am just wondering where I can keep tabs on any progress and maybe learn a thing or two about the process.

     

     

    viewtopic.php?f=2&t=5026&start=870#p37030

     

Is there any news from the developers about DSM 5.2?

     

    Yes - expect a release for DSM 5.2 very soon. I'm doing some (hopefully) final testing tonight.

Then that explains why you have md2 and md3.

     

I'm still unsure about your md0. Normally it has a partition on every disk in the system (besides the bootloader)... but maybe it's different if you initially have a single-disk volume and then create another volume with other disks. But then again, the swap partition (md1) is mirrored on each of your disks...

Is there any secret, undocumented command to check the disks and set the state back to normal? (I assume there will be no problem, because I can access all the files and folders stored in the volumes.)

     

I know Synology support can fix it over SSH, but I haven't seen it mentioned anywhere publicly how they manage to do it. I had this happen a while back and was forced to just back up my data and start over.

     

It looks like you have 4 disks. All 4 disks should be listed in each of the md#'s. Each storage disk's first partition is for the OS (DSM), and they are all mirrors of each other. Each disk should have a listing under md0, similar to my system below:

     

DiskStation> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda5[0] sdd5[3] sdc5[2] sdb5[1]
     8776594944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
     2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
     2490176 blocks [12/4] [UUUU________]

unused devices: <none>
DiskStation>
    

     

You're also missing a disk from your storage array... so not all of your data is going to be there. The disk listed under md2 should most likely be listed with the ones in md3.
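
If it were mine, the first thing I'd try for the missing system-partition mirrors is re-adding them (a guess on my part, not a tested procedure - check the partition names against your own mdstat first, and note this won't necessarily clear the (E) flag):

mdadm /dev/md0 --add /dev/sdb1   # repeat for sdc1, sdd1
cat /proc/mdstat                 # watch the mirrors resync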

Boot them up in a Linux system and see if you can use your array. DSM is a PITA when it thinks an array has crashed and it really hasn't.

     

edit: Check from SSH whether the volume is still accessible before trying the above... I've seen DSM say an array was crashed, yet it was still working via a terminal session.

     

I'll try it tomorrow.

Also, I'm terrible at the terminal - where is the array usually mounted, so I can check?

    DiskStation> cd /
    DiskStation> pwd
    /
    DiskStation> ls -l
    drwxr-xr-x    2 root     root          4096 Mar  1 18:55 bin
    drwxr-xr-x   10 root     root         53248 Dec 30 17:36 dev
    drwxr-xr-x   35 root     root          4096 May 25 04:05 etc
    drwxr-xr-x   32 root     root          4096 Dec 30 17:36 etc.defaults
    drwxr-xr-x    2 root     root          4096 May 30  2014 initrd
    drwxr-xr-x   23 root     root         20480 May 22 14:51 lib
    drwxr-xr-x    2 root     root          4096 Jun 11  2014 lib64
    drwx------    2 root     root          4096 May 30  2014 lost+found
    drwxr-xr-x    2 root     root          4096 May 30  2014 mnt
    dr-xr-xr-x  153 root     root             0 Dec 30 17:36 proc
    drwxr-xr-x    7 root     root          4096 Mar 28 09:16 root
    drwxr-xr-x    7 root     root           180 May  5 07:29 run
    drwxr-xr-x    2 root     root          4096 Sep  9  2014 sbin
    drwxr-xr-x   12 root     root             0 Dec 30 17:36 sys
    drwxrwxrwt   16 root     root          2020 May 25 20:11 tmp
    drwxr-xr-x    8 root     root          4096 Jun 11  2014 usr
    drwxr-xr-x   16 root     root          4096 Mar  1 18:55 var
    drwxr-xr-x   11 root     root          4096 Jun 11  2014 var.defaults
    drwxr-xr-x   20 root     root          4096 May  5 07:26 volume1
    drwxr-xr-x    5 root     root          4096 Feb 26 12:32 volumeSATA1
    DiskStation> cd /volume1/
    DiskStation> ls -l
    drwxrwxrwx   15 root     root          4096 May 22 14:51 @appstore
    drwx------    2 root     root          4096 May  5 07:29 @autoupdate
    drwxr-xr-x    5 admin    users         4096 Mar  1 18:42 @database
    drwxr-xr-x    9 admin    users         4096 Mar 27 07:34 @download
    drwx------    2 root     root          4096 Jun  5  2014 @iSCSITrg
    drwxr-xr-x    2 root     root          4096 Sep 30  2014 @smallupd@te_deb
    drwxrwxrwx    4 root     root          4096 May  5 07:29 @spool
    drwxrwxrwt   39 root     root          4096 May 25 04:00 @tmp
    drwx------    5 admin    users         4096 Jun  5  2014 Plex
    ...
    DiskStation>
    

    /volume1/ is where your array data should be.
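
A quick way to confirm the volume is actually mounted and the array behind it is up (stock commands, nothing DSM-specific):

df -h /volume1      # should show the array's size and usage
cat /proc/mdstat    # the data array should say "active"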
