XPEnology Community


Posts posted by killseeker

  1. Really appreciate all the information you guys have posted over the last few hours.

     

    I've been wracking my brain over my test system.

     

    It's a basic system with one SATA controller providing 6 SATA ports.

     

    I have 2 disks connected to the SATA controller, plus my USB thumb drive.

     

    The whole time I'd been using SataPortMap=61 with a DiskIdxMap of 0006.

    The system would boot to the installer, but constantly gave me an error message for disks 2, 3, 4 and 6.

     

    Logging in via Telnet (thanks for the tip to run http://IPADDRESS:5000/webman/start_telnet.cgi, so useful), I could see my drives were still being detected, but the errors were being raised for the drives that weren't connected. That made me think I needed to remove them from the DiskIdxMap and only count the number of drives actually connected to the controllers.

     

    dmesg | grep sd
    [    0.000000] ACPI: SSDT 0x00000000DD16F460 03493 (v01 SaSsdt SaSsdt   00003000 INTL 20091112)
    [    1.540586] sd 0:0:0:0: [sda] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
    [    1.544529] sd 1:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
    [    1.544530] sd 1:0:0:0: [sdb] 4096-byte physical blocks
    [    1.544558] sd 1:0:0:0: [sdb] Write Protect is off
    [    1.544559] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
    [    1.544572] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [    1.588987] sd 0:0:0:0: [sda] 4096-byte physical blocks
    [    1.594283] sd 0:0:0:0: [sda] Write Protect is off
    [    1.599081] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    [    1.604159] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [    1.604214]  sdb: sdb1 sdb2 sdb5
    [    1.604620] sd 1:0:0:0: [sdb] Attached SCSI disk
    [    1.689131]  sda: sda1 sda2 sda5
    [    1.692694] sd 0:0:0:0: [sda] Attached SCSI disk
    [   20.320275] sd 2:0:0:0: [synoboot] 30031872 512-byte logical blocks: (15.3 GB/14.3 GiB)
    [   20.328828] sd 2:0:0:0: [synoboot] Write Protect is off
    [   20.334054] sd 2:0:0:0: [synoboot] Mode Sense: 43 00 00 00
    [   20.339787] sd 2:0:0:0: [synoboot] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
    [   20.357773] sd 2:0:0:0: [synoboot] Attached SCSI removable disk
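A note for anyone reading the log above: the first number in each `sd h:c:i:l` line is the SCSI host, and with AHCI/libata each SATA port is enumerated as its own host, so the distinct host numbers tell you which port or controller a disk hangs off. A quick generic sketch of pulling those numbers out of a dmesg excerpt (run here against a sample of the log above rather than the live box):

```shell
# Pull the SCSI host number out of each "Attached SCSI disk" line of a dmesg
# excerpt like the one above. With AHCI/libata every SATA port is its own host,
# so hosts 0 and 1 here are two SATA ports and host 2 is the USB boot stick.
dmesg_sample='sd 0:0:0:0: [sda] Attached SCSI disk
sd 1:0:0:0: [sdb] Attached SCSI disk
sd 2:0:0:0: [synoboot] Attached SCSI removable disk'

printf '%s\n' "$dmesg_sample" | sed -n 's/^sd \([0-9]*\):.*Attached SCSI.*/\1/p' | sort -u
```

Against the excerpt above this prints 0, 1 and 2: two SATA ports plus the USB stick.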

     

    So after trying many other configurations, I tried using:

    SataPortMap=61 and DiskIdxMap=0002

     

    i.e. 2 disks starting at slot 00 on the first controller, and the 1 USB drive starting at slot 02 = 0002

     

    BAM! It worked: it detected the disks and installed DSM 7.0.

     

    I do have a "system partition failed" message, which is easily fixed, but otherwise my 2-disk array is operational.

     

    Is this the correct way to assign DiskIdxMap? I thought it needed to cover the total number of drives that could eventually be connected to the controllers, so all night I assumed I needed to use 0006 or above.
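For what it's worth, my reading (an assumption based on this experiment, not official documentation) is that DiskIdxMap is just a string of two-hex-digit pairs, one per controller, each giving the first drive slot that controller maps to; it doesn't need to reserve room for drives you might add later. A small bash sketch of that decoding:

```shell
# Decode a DiskIdxMap string into per-controller starting slots (bash).
# Example from this post: 0002 -> SATA controller starts at slot 0,
# the USB/boot controller at slot 2.
decode_diskidxmap() {
    local map=$1 i=0
    while [ "$i" -lt "${#map}" ]; do
        printf 'controller %d starts at disk slot %d\n' $((i / 2)) $((16#${map:$i:2}))
        i=$((i + 2))
    done
}

decode_diskidxmap 0002
```

The same decoding applied to a value like 0A0008 would give starting slots 10, 0 and 8 for three controllers.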

     

    It's 2:20 am now and I'm pretty tired; hopefully this helps someone or turns out to be a useful finding. If you want any other dumps from the test NAS, let me know.

     

     

     

    testnas_image1.png

    testnas_image2.png

  2. Had some issues with the lspci database; got it fixed.

     

     

    My DSM 6.2 install has no lspci database to run against. I've copied the "pci.ids.gz" file over from the TinyCore boot loader to my DSM 6.2 install, and now have a proper extract.

    How does that look @Dvalin21, @pocopico?

     

    Would this system be compatible with TinyCore and DSM 7?

     

    lspci-v_primaryNAS-Extract_working.txt

  3. 8 minutes ago, Dvalin21 said:

    You don't have a PCI SAS RAID controller; you are using the on-board SATA controller. There seem to be better results with on-board controllers than with PCI cards. Everything is being worked on, and providing info helps get things fixed.

     

    Thanks for responding, Dvalin21. Yes, I am using the motherboard SATA ports for this test server, as I don't want to upgrade my production system until I know this is stable enough :)

     

    My primary NAS uses an LSI SAS 9300-8i PCIe controller.

    Attached is the output of the lspci -v command, which I've run from my DSM 6.2 install.

     

    Would this be compatible with the TinyCore RedPill loader?

     

    lspci-v_primaryNAS.txt

  4. Are there any enhancements coming to the SataPortMap & DiskIdxMap detection process?

     

    I've gone through the motions of building a basic 2-disk 6.2 system and then upgrading to 7.0; however, during the process I had issues with disk detection.

     

    Using the previously suggested option of "SataPortMap=188 DiskIdxMap=0A0008" I've managed to get past the "something went wrong, SATA port shutdown" error; however, in doing so only 1 disk is visible in DSM 7.0 after the upgrade.

     

    Every time I try to use the automatically detected SataPortMap and DiskIdxMap options, I cannot see any disks in the DSM 7 installer.
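For context, both maps are passed to the kernel on the loader's command line; a hypothetical sketch of how the suggested workaround might look in the loader's grub.cfg (the variable name here is a placeholder for illustration, and the exact syntax differs between loaders):

```shell
# Illustrative grub.cfg fragment (variable name is a placeholder, not the
# loader's real one): first controller exposes 1 port, the next two expose
# 8 each; their disks start at slots 0x0A, 0x00 and 0x08 respectively.
set sata_args='SataPortMap=188 DiskIdxMap=0A0008'
```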

  5. I've built a test system using the instructions and it ended up working great. I've tested my primary apps (Docker, Plex, Sonarr, SABnzbd) and all are working well :) This is a great little tool, @pocopico, thank you :)

     

    I can see from the messages above that some people appear to be converting production systems. Do we believe this is ready for primary usage?

     

    I am eager to finally upgrade my 6.2 system :).

  6. - Outcome of the installation/update: SUCCESSFUL

    - DSM version prior update: Reinstalled DSM 6.2-23739 Update 2 over the top after upgrade failure from 6.1.7

    - Loader version and model: JUN'S LOADER v1.03b - DS3615xs

    - Using custom extra.lzma: NO

    - Installation type: BAREMETAL - Gigabyte ga-h97n-wifi

    - Additional comments: As previously mentioned, the on-board Atheros NIC no longer works after this update. The on-board Intel NIC works OK (I have popped in a dual-port Intel gigabit card to retain link aggregation).

     

    Everything else seems to be working, except that I cannot actually restart or shut down the NAS. When I issue a restart or shutdown, DSM and all network connectivity are lost, although the NAS still responds to ICMP pings; it is no longer visible in Synology Assistant. I am forced to hard power off the NAS and power it on again to restart it.

     

    Overall, I required major troubleshooting to get my upgrade from 6.1.7 to 6.2 to work.

  7. People seem to be talking about things fixed in 2.3 as if they already have it.... has it been released?

    No, not officially. 2.3 beta is available as an update in the Config Tool, but there is not enough room for drivers on the old bootloader, so it can get corrupted.

     

    I still recommend waiting a few more days. 2.3 will be a much more stable release, with a lot more features and drivers than what is currently listed in 2.3.

     

    Sent from my SM-N920T using Tapatalk

     

     

    Thanks quicknick, looking forward to its release :smile:

  8. I've been playing around with link aggregation. From testing, I identified that the Synology OS doesn't change the transmit hash algorithm on the network bonding interface from its default. This results in reduced outbound performance from your NAS (all TX traffic exits via the same NIC).

     

    After searching all over, I came across this link: http://forum.synology.com/enu/viewtopic ... 4c68d30be0

     

    The fix is to add "xmit_hash_policy=layer3+4" to the bonding options in "/etc/sysconfig/network-scripts/ifcfg-bond0".

     

    To make this permanent across reboots, open /etc/sysconfig/network-scripts/ifcfg-bond0, find the BONDING_OPTS setting and insert "xmit_hash_policy=layer3+4" after the mode=4 setting. The result is as follows:

    BONDING_OPTS="mode=4 xmit_hash_policy=layer3+4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"
    

     

    To change it back, simply remove the xmit_hash_policy option; the default is layer2.
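The difference is easy to see in miniature. The layer2 policy hashes only the source/destination MAC pair, which is identical for every flow that leaves through the same switch or gateway, while layer3+4 also mixes in the TCP/UDP ports; a deliberately simplified toy sketch (the real bonding driver hashes more fields, and these helper names are made up for illustration):

```shell
# Toy versions of the bonding transmit hash policies for a 2-slave bond.
# layer2: XOR of the MAC addresses (last byte only here, for brevity).
# layer3+4: also folds in the TCP/UDP ports (ports only here, for brevity).
layer2_hash()  { echo $(( (16#$1 ^ 16#$2) % 2 )); }  # args: last byte of src/dst MAC
layer34_hash() { echo $(( ($1 ^ $2) % 2 )); }        # args: src port, dst port

# Every flow between the NAS and the switch shares one MAC pair, so layer2
# always selects the same slave:
layer2_hash 5e a1   # -> 1
layer2_hash 5e a1   # -> 1, same slave for every flow

# Different source ports let layer3+4 land flows on different slaves:
layer34_hash 51234 445   # -> 1
layer34_hash 51235 445   # -> 0
```

After changing the option you can confirm it took effect with `cat /proc/net/bonding/bond0`, which reports the active policy on its "Transmit Hash Policy" line.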

     

    I found this extremely helpful and wanted to share it here; it will be a good reference in case anyone else runs into this.

     

    Cheers,

     

    Kill_Seeker

  9. Hey All,

     

    I'm having a strange issue at the moment: my x64 install of XPEnology is working fine, however the Network Interfaces GUI is blank and stuck on "Loading".

     

    I'm running 5565 Update 1

     

    Is there anything I can do to bring it back?

     

    I think it came about because I installed a 2nd network card in the device. That went fine: the XPEnology system saw the new NIC and added the devices. I then tried to create a new bond0 and nothing happened. After a reboot, the blank GUI started appearing.

     

    I know that if this were a proper Synology system I could hit the reset button once to clear my NIC settings, bonds, etc., however being a custom build I don't have that button.

     

    I do seem to have a large number of Ethernet interfaces in my CLI (eth0 through eth7), however only my new Intel dual-port card is enabled.

     

    Any help would be great thank you.

     

    Cheers,

     

    Kill_Seeker
