XPEnology Community

Posts posted by unmesh

  1. I'm exploring various ideas for a NAS with a low physical and power footprint that could be placed remotely, and one thought is to use a small 1L desktop with a 2.5" HDD. The largest such drive is 5TB but SMR, and I'm hoping someone can share their experience with using SMR drives for backups of PCs or MacBooks, which should present a sequential write workload. Are only random writes problematic for SMR HDDs?

     

    Also, some of these drives are reported to support TRIM. I had only heard of TRIM in the context of SSDs, which DSM only supports as caches, so would DSM support TRIM on an HDD, and would it make a difference in this use case?
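
    For what it's worth, one quick way to check whether a given drive even advertises TRIM is from a Linux shell (the device name is an example, and I'm not sure DSM ships hdparm by default):

    # does the drive report TRIM support? (run as root; /dev/sdb is an example)
    hdparm -I /dev/sdb | grep -i trim
    # non-zero discard values here mean the kernel could issue discards to it
    lsblk --discard /dev/sdb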

     

    Thanks

     

    P.S.: It is not really a hardware mod question, but I wasn't sure where to post it.

     

     

  2. What has worked for me in the past is to disconnect the old drives, add a new (in my case, small) drive, and flash a new bootloader USB. I then install the last known DSM version onto the new drive using the same login and password. Then I add back the original drives and DSM offers to correct the version on them (it might be the migrate option, but I don't remember). I let it do that, power down, remove the new drive, and let it reboot with only the original drives, and all was fine.

     

    I keep good records of what is running and save the bootloader images with their modified grub.cfg and the .pat files, which helps. As does having a spare SATA port!

  3. When I try creating VMs with arkilee's .vma file, I get the following warnings during the restore, though DSM does come up:

     

    WARNING: You have not turned on protection against thin pools running out of space.
    WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.

     

    What, if anything, do I need to do to fix this?
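
    From what I can tell, the warning points at the autoextend settings in /etc/lvm/lvm.conf on the Proxmox host; a minimal sketch, with example values, would be:

    # /etc/lvm/lvm.conf on the Proxmox host -- example values only
    activation {
        # start extending a thin pool automatically once it is 80% full...
        thin_pool_autoextend_threshold = 80
        # ...growing it by 20% each time
        thin_pool_autoextend_percent = 20
    }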

     

    Also, a display isn't configured but a serial port is. It takes me some time to run "qm terminal <vmid>" after starting the VM, so I miss the early output. How do I attach a terminal to the serial port without missing any characters?
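
    One workaround I'm considering is chaining the start and the console attach so the terminal connects as soon as the VM starts (vmid 100 is a placeholder), though the very first firmware output could still slip by:

    # start the VM and attach to its serial console immediately afterwards
    qm start 100 && qm terminal 100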

     

    Thanks

  4. Large file transfers between the VM on the Lenovo TS140 and the HP N54L have historically been limited to approximately 95MB/s by the Gigabit Ethernet link.

     

    I got 10Gigabit NICs for both and have recently got around to testing performance. For the network, iperf3 shows about 9Gbps between the two. The ESXi server happens to have an NVMe drive in addition to hard drives, so I used the former for one end of the storage transfer. The HP only has hard drives, though the one I tested is modern: dd writes from /dev/zero run at 250MB/s and dd reads to /dev/null run at 150MB/s after flushing the cache.
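
    For anyone who wants to reproduce the local disk numbers, something along these lines should work (the file path and sizes are illustrative):

    # sequential write of 4GiB of zeros, including the final flush in the timing
    dd if=/dev/zero of=/volume1/testfile bs=1M count=4096 conv=fdatasync
    # drop the page cache so the read test actually hits the disk
    sync && echo 3 > /proc/sys/vm/drop_caches
    # sequential read back into /dev/null
    dd if=/volume1/testfile of=/dev/null bs=1M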

     

    However, network transfers from the NVMe to this hard drive using File Station and a remote CIFS mount benchmark at around 150MB/s, and only 120MB/s in the opposite direction. Should there be that much dropoff, or is this not a fair test?

     

    The CPU utilization on the weaker HP remains below 20%, so it would seem that drivers and SMB aren't pegging it.

     

    I could add a 2.5" SATA drive to the HP and test with it, even though the bays are SATA2 rather than SATA3, but I was hoping for some insight from the community first.

  5. OP asked about putting the synoboot image in a datastore on a USB drive. I wanted to create USB drives with the latest versions of ESXi 6.5, 6.7 and 7.0 to test which ones would boot on a variety of PCs, and learned that one could create a datastore on the ESXi boot drive itself! The following link was instrumental.

     

    https://www.horizonbits.com/2017/02/19/squeezing-esxi-on-usb/
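
    If I remember the gist correctly, it comes down to carving out a spare partition on the boot device and formatting it as VMFS from the ESXi shell, roughly like this (the device path, partition number and datastore name are placeholders; the article has the exact partedUtil step):

    # inspect the partition table of the boot device
    partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
    # after adding a partition for it, format that partition as VMFS6
    vmkfstools -C vmfs6 -S usb-datastore /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:9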
     

    I haven't tried it as a datastore for deployment though.

  6. @bearcat,

     

    I could only boot with the USB stick that has the wrong VID/PID, and I wanted to edit grub.cfg while running DSM. When I SSH'ed in, I noticed that there wasn't a /dev/synoboot1 available to mount. Having used FixSynoBoot.sh on ESXi, I remembered reading that it had helped with some baremetal installs, so I tried it, and it did!
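
    For anyone else who lands here: once FixSynoBoot.sh has created the device node, mounting the boot partition to edit grub.cfg is straightforward (the mount point is arbitrary):

    # mount the first loader partition and edit grub.cfg in place
    mkdir -p /tmp/synoboot
    mount /dev/synoboot1 /tmp/synoboot
    vi /tmp/synoboot/grub/grub.cfg
    umount /tmp/synoboot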

     

    I will try and find some time to do a clean install.

  7. The serial number is a previous serial number plus 1, and I don't remember how I got that one.

     

    The first MAC address belongs to a Realtek-based add-in NIC made by TP-Link, while the second is the Broadcom embedded NIC.

     

    I use Win32DiskImager too.

     

    And here is something new. I used Win32DiskImager to flash a USB stick with the image previously used for the N54L and booted the N40L with that stick and a new hard drive. I installed DS3615xs 6.2.3, enabled SSH, installed FixSynoBoot.sh, mounted /dev/synoboot1, edited grub.cfg with the same information shown above, saved a copy of it to my laptop, and rebooted.

     

    The N40L now shows the new serial number and MAC addresses!

     

    I remounted the .img that I had created for the N40L using OSFmount, this time as read-only, and opened grub.cfg in an editor; it is identical to the one on the USB stick.

     

    Weird.

     

     

  8. I've been running fine on a baremetal N54L and was able to get my hands on an N40L to use for remote backup. In any case, I added a PCIe NIC to it, mounted the saved synoboot.img file from my N54L installation in OSFmount, and edited grub.cfg with a new serial number, the MAC addresses of the two NICs (I realize that is not necessary), and the new VID:PID of the boot USB stick. I flashed the .img file to the stick using Win32DiskImager and booted the N40L without any hard drives.

     

    It seemed to boot but then failed with several

    error: can't find command 'common_add_option'

    messages and one

    error: can't find command 'loadinitrd'

     message.

     

    I then took the USB stick from the N54L and booted the N40L with it; it came up fine, to the extent that Synology Assistant saw the uninstalled DiskStation instance.

     

    So I then flashed the .img that I had used for the N54L onto the new USB stick and booted it. Again, it boots into the uninstalled DiskStation instance.

     

    Here is the fragment that I changed in grub.cfg, in case someone can point out what I'm doing wrong with the edits:

     

    # JetFlash Transcend Black/Red 2GB
    set vid=0x058F
    set pid=0x6387
    set sn=B3J4N01008
    # Realtek Gigabit
    set mac1=10feed039e03
    # HP N40L integrated NIC
    set mac2=e4115b1389bb
    set rootdev=/dev/md0
    set netif_num=2
    set extra_args_3615=''

     

    I have triple-checked the USB VID:PID, and I do a diskpart clean before flashing.
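
    For completeness, the clean itself is just this sequence in an elevated prompt (disk 2 is only an example -- double-check the number from "list disk" so a data disk doesn't get wiped):

    diskpart
    list disk
    rem pick the USB stick, not a data disk!
    select disk 2
    clean
    exit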

  9. I recently upgraded one of my GbE NICs on a N54L to a 10GbE NIC and was gratified to see that SMB Multichannel still worked with 10+1 GbE on the server and 1+1 GbE on my Windows 10 desktop. Throughput for large file transfers was understandably capped at 200-ish MBps.

     

    I then bought another 10GbE NIC to do a similar upgrade on my ESXi 7 server, which is configured with two vswitches, one for each of its GbE NICs. While ESXi recognizes the Mellanox NIC and I can log into the server on either of the two IP addresses, SMB Multichannel performance is only around 100-ish MBps.

     

    smb.conf is identical on both systems and the only change was the NIC upgrade.

     

    Any thoughts on why this might be happening?
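
    One thing I may try is telling Samba explicitly which interfaces exist and how fast they are, since multichannel picks its paths from the advertised speed and RSS capability. A sketch of what I have in mind, with placeholder addresses and speeds (and I'm not certain DSM preserves manual smb.conf edits):

    # smb.conf [global] section -- illustrative only
    server multi channel support = yes
    # advertise the 10GbE address with RSS and the 1GbE address at its real speed
    interfaces = "192.168.1.10;speed=10000000000,capability=RSS" "192.168.1.11;speed=1000000000"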

     

    Thanks

     

    P.S.: iperf3 does show 1Gbps and 10Gbps when bound to one interface vs. the other.
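
    (Binding to a specific local address with -B is one way to run that per-interface test; the addresses below are placeholders.)

    # test through the 10GbE interface by binding to its address
    iperf3 -c 192.168.1.20 -B 192.168.1.10
    # same test through the 1GbE interface
    iperf3 -c 192.168.1.20 -B 192.168.1.11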

  10. I'd like to put together an x86/x64 1-bay NAS with a 10TB 3.5" hard drive and drop it off at a friend's location as a remote NAS backup target, but the SFF/thin-client PCs I see are typically designed for 2.5" drives. Maybe I haven't looked hard enough and someone can suggest devices.

     

    An alternative would be to hang a USB 3.0 drive off a power-sipping thin client running XPEnology, which would look a little kludgey. I've found the instructions to have DSM use USB drives as internal drives, but is this a bad idea?
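
    If those instructions are the usual synoinfo.conf approach, they boil down to shifting the port bitmasks in /etc.defaults/synoinfo.conf so the USB ports count as internal; a heavily simplified sketch (the hex masks depend on the actual port layout, so these values are illustrative only):

    # /etc.defaults/synoinfo.conf -- illustrative bitmask values only
    maxdisks="4"
    # which ports DSM treats as internal / USB / eSATA (one bit per port)
    internalportcfg="0xf"
    usbportcfg="0x30"
    esataportcfg="0x0"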

     

    Thanks

  11. My (failing) memory thought that was what I had done originally, but I managed to find some notes that said I should use the first entry, so you are correct.

     

    I also decided to compare all the BIOS settings against my notes and discovered that C1E had somehow gotten enabled :-(. Weak CMOS battery, perhaps? I suspect that putting back the old USB stick and hard drives will now work.

     

    The Mellanox is first in PCIe enumeration, so the only way to have a 1GbE NIC first is to take the card out and optionally put the Intel NIC back. Good news is that the system booted and I could access the GUI at the Mellanox's IP address. Bad news was that a manual install of DSM 6.2.3 got stuck at 56%. Subsequent good news is that an automatic install of the latest DSM completed and the system is up again!

     

  12. So here are the steps I took on a Windows machine to prepare the USB stick:

     

    - Downloaded 3615 1.03b bootloader zip file from the repository and extracted synoboot.img

    - Used OSFmount to mount partition 0 as a writable drive letter

    - Inserted a flash drive into a USB port and used USBDeview to determine its VID:PID (a PowerShell cross-check is sketched at the end of this post)

    - Navigated to the grub folder and edited grub.cfg to change the VID:PID, change the serial number, change/add MAC addresses for the Mellanox and the integrated NICs, and uncomment the menu items for AMD

    - Saved the file and dismounted the image

    - Used Win32DiskImager to burn synoboot.img to the flash drive

    - Ejected the flash drive, put it into the N54L with a single drive in the leftmost bay, and powered on

    - When Jun's menu showed up on the attached monitor, arrowed down to the third/AMD item and hit return

    - Waited to see if Synology Assistant picked up this DSM or the DHCP server showed a request/grant

    - Neither of the two happened

     

    I have a blind spot that is causing me to miss something basic.
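
    For the record, the VID:PID reported by USBDeview can be cross-checked from PowerShell (output formatting varies by Windows version):

    # list USB devices; DeviceID contains VID_xxxx&PID_xxxx
    Get-CimInstance Win32_PnPEntity |
      Where-Object { $_.DeviceID -like 'USB\VID_*' } |
      Select-Object Name, DeviceID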

  13. I have 4 small/old hard drives in the N54L, so I decided that I would buy a faster 12TB hard drive to go with my 10G NIC. However, when I replaced the 4 drives with this one and tried to reboot, I could not get to the web page that asks me about a new installation. I even booted off an Ubuntu USB stick to make sure the disk and the built-in and Mellanox NICs were still accessible; they were.

     

    I then put the original hard drives back but it still won't get on the network.

     

    I then flashed a 3615 1.03b bootloader image onto a new flash drive with a new VID:PID, but that won't boot either.

     

    What next? Although I am curious why the original setup is no longer working, I am okay to do a fresh install with just the new drive and restore the content from one of the other NASes.

  14. On 1/23/2021 at 8:28 AM, flyride said:

    I'm using the U-NAS 4-bay and 8-bay cases.  They are very compact but hard to build.

    I've been thinking of moving my ASrock Q1900DC-ITX to a smaller case, loading XPE and placing it in a friend's house as a remote backup instead of buying something like a DS220j.

     

    What makes the U-NAS 4-bay hard to build? (I see they have a 2-bay too)

     

    Thanks
