About C-Fu


  1. Yeah, after I posted my question I followed up with him personally, and he confirmed that he'd had tons of issues with his setup. About quicknick: AFAIK he pulled his loader, so I suppose only those who got it earlier would know more. Oh well. One can only dream.
  2. How would you go about doing this? I just wanna see what maximum write speed I can achieve so I can upgrade my network infrastructure accordingly. Example: if my 13-disk SHR can achieve 400 MB/s writes, then I'll attach a quad-gigabit or 10G Mellanox PCIe card to it, with LACP-capable hardware and all. Preferably a tool that won't destroy the existing data on the SHR.
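A quick, non-destructive way to estimate sequential write speed is to write one large test file with dd and let it report throughput. A sketch only: the target directory is a placeholder (on DSM you'd point it at a folder on the SHR volume, e.g. /volume1/speedtest), and `conv=fsync` forces the data to disk so the page cache doesn't inflate the number.

```shell
# Write a 1 GiB test file to the volume, let dd print MB/s, then clean up.
# TARGET is a stand-in -- replace with a directory on the SHR volume.
TARGET="${TARGET:-$(mktemp -d)}"
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=1024 conv=fsync
rm -f "$TARGET/ddtest.bin"
```

If fio is available (e.g. via Entware), it gives more control (sequential vs random, block size, queue depth), but plain dd is enough for a rough ceiling.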
  3. Weird: I clicked Add Disk, and I can somehow add all of the ex-volume3 disks (1x3TB + 2x10TB). I thought I couldn't add the 3TB disk, since the new /volume1 already has 3x6TB disks? With the exception of Note Station contents, everything seems to be working right up to the point where I left off. 😁
  4. I took a (calculated) chance and did it. Mental note: with SHR, you can only add drives as big as or bigger than the biggest drive in your current RAID setup (in my case 6TB, when I wanted to add 3x3TB). Basic idea (guess): back up/clone the files, take out the drives that contain the backup, destroy/format/wipe/fresh-install DSM on all drives, recreate the SHR including the newer, smaller, empty drives, restore the backup, then reinstall the apps - they will automagically use the data(base) from the previously backed-up files.
     Starting layout:
     - /volume1 (~20TB): 5x3TB + 3x6TB
     - /volume3 (~20TB): 1x3TB + 2x10TB
     - Empty drives: 3x3TB, created as Basic disks (volume4/volume5/volume6) so DSM data gets saved to them (DSM is installed on all initialized disks as RAID1; not needed if you don't take out /volume3)
     What I did:
     1. Backed up /volume1 to /volume3 using HyperBackup - all apps, all configs, all folders. (Just shared folders, or all RAID contents? Not sure; needs clarification.)
     2. Took out the /volume3 drives so the DSM reinstall doesn't format/empty the backed-up drives. (Not needed if you know which physical drives contain the backup files; I didn't want to copy down the HDD serials just to be sure.)
     3. Rebooted. Reinstalled DSM fresh from the boot menu, same DSM version. (Apparently a reformat/fresh install isn't actually needed; I didn't really know.)
     4. SSHed into DSM. sudo vi /etc.defaults/synoinfo.conf and set maxdisks=24, usbportcfg (0x0), esatacfg (0x0), and internalportcfg (0xffffff), as well as support_syno_hybrid_raid="yes" and #supportraidgroup="yes", to enable SHR again and raise the max disks to 24.
     5. Installed HyperBackup. Shut down. Plugged the 3 drives for /volume3 back in. Powered on.
     6. Removed /volume1 as well as Storage Pool 1. Removed the 3x3TB Basic disks' volumes and storage pools, as the DSM data is still available on the 3 disks in /volume3. Rebooted (just to be safe?).
     7. Created a new SHR out of the empty 3x3TB drives as well as the 5x3TB + 3x6TB. It automatically created /volume1 with a bigger SHR than the original.
     HyperBackup will detect the .hbk in /volume3 automagically when you click Restore (didn't know how or why). It also restores the configuration of your previous installation (didn't know - awesome!). Gonna wait it out and see whether the app databases (Office, Drive, Note Station, Plex) are restored as well when I install those apps back. Planned steps afterwards: reinstall the apps after the restore is done, and hopefully they find the database files and use them instead of recreating new databases.
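The synoinfo.conf edit described above can be scripted with sed instead of vi. A sketch only, demonstrated on a throwaway copy so nothing real is touched: on an actual box you'd set CONF=/etc.defaults/synoinfo.conf and run the sed lines with sudo. The key names and values are the ones given in the post; note that DSM's eSATA key is usually spelled `esataportcfg`, so double-check the spelling in your own file before editing.

```shell
# Demo on a throwaway copy; on DSM use CONF=/etc.defaults/synoinfo.conf with sudo.
CONF=$(mktemp)
printf 'maxdisks="12"\nusbportcfg="0x300000"\nesataportcfg="0xff000"\ninternalportcfg="0xfff"\nsupportraidgroup="yes"\n' > "$CONF"

cp "$CONF" "$CONF.bak"                                      # always keep a backup
sed -i 's/^maxdisks=.*/maxdisks="24"/' "$CONF"
sed -i 's/^usbportcfg=.*/usbportcfg="0x0"/' "$CONF"
sed -i 's/^esataportcfg=.*/esataportcfg="0x0"/' "$CONF"
sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' "$CONF"
sed -i 's/^supportraidgroup=/#supportraidgroup=/' "$CONF"   # comment it out, per the post
grep -q '^support_syno_hybrid_raid=' "$CONF" || echo 'support_syno_hybrid_raid="yes"' >> "$CONF"
cat "$CONF"
```

Keep in mind DSM updates can rewrite this file, so the edit may need to be reapplied after an upgrade.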
  5. I don't mind reinstalling, as the data is backed up via HyperBackup - like the Plex database. Unless you're telling me HyperBackup doesn't back up data on my /volume1 for things like Moments, Drive, the Plex database folder, etc.
  6. I know that. That's why I said I want to remove the SHR, then add all the smaller drives (including new ones) first, while keeping a backup copy on the biggest drives. Please reread my post and tell me what you think, thanks.
  7. So I have a bunch of drives in /volume1: 5x3TB + 3x6TB (added later). I have a bunch more drives currently in /volume3 (/volume2 was deleted; it's just 1 SSD for VM storage): 1x3TB + 2x10TB. And a bunch of unused drives: 3x3TB. I want to add all of the drives into one big /volume1. So:
     1. Back up /volume1 using HyperBackup into /volume2.
     2. Delete/destroy /volume1.
     3. Add all the 3TB drives first (minus the one 3TB drive in /volume2) as well as the 3x6TB drives into the newly built /volume1. All in SHR.
     4. Restore the HyperBackup from /volume2, then destroy /volume2.
     5. Expand /volume1 by adding the 2x10TB from /volume2.
     That leaves me with 1x3TB unused. Questions:
     1. Will this work?
     2. What will happen to my DSM and apps during deletion, creation of the SHR, and restoration from HyperBackup?
     3. Will the newly created volume/storage pool be /volume1 or /volume4 after I remove /volume1?
     4. Anything else I need to worry about? /volume1's used size is 20TB, and the HyperBackup on /volume2 is 18.4TB.
     Cheers and thanks!
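A small aside on the "which physical drives do I pull" problem that comes up in these shuffles: drive serials can be listed from the shell before powering down and matched to the tray labels. A sketch; lsblk ships with the Linux under DSM, though the available columns vary by version.

```shell
# List each physical disk with size, serial and model so the drives can be
# matched to trays with certainty before pulling them (fall back to a
# minimal column set on builds whose lsblk lacks SERIAL/MODEL).
lsblk -d -o NAME,SIZE,SERIAL,MODEL 2>/dev/null || lsblk -d -o NAME,SIZE
```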
  8. Because of the 24/26-disk hard limit of syno/XPEnology, I was trying to figure out if there is a way of combining multiple XPEnology rigs to create one combined volume. Example:
     - DS3617: 1TB /volume1/FolderA/SubFolderA,B,C
     - DS3615: 2TB /volume1/FolderA/SubFolderC,D,E and 3TB /volume1/FolderB/SubFolderX,Y,Z
     Result: "DS36157": 3TB /volume1/FolderA/SubFolderA,B,C,D,E and 3TB /volume1/FolderB/SubFolderX,Y,Z
     Is there a way to do this? Clients would only see shares from DS36157. I'm currently reading up on GlusterFS and LizardFS, and that seems like the right idea/the future. I read up on High Availability, and that seems to be more about having a redundant/failover XPEnology system. Also, how would I design the whole system? Two XPE boxes and one XPE "master" or something? Cheers and thanks!
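For the "combine multiple boxes into one namespace" idea, GlusterFS's distributed volume type does roughly this: each box exports a directory (a "brick"), and clients mount a single volume that spans all bricks. A sketch only, with made-up hostnames and brick paths, and assuming the GlusterFS packages can even be installed alongside DSM (untested; note a plain distributed volume has no redundancy - losing one box loses that box's files):

```shell
# Run from one box after installing glusterfs-server on both.
# Hostnames ds3615/ds3617 and the brick paths are hypothetical.
gluster peer probe ds3615                 # join the two boxes into a trusted pool
gluster volume create bigvol \
    ds3617:/volume1/brick ds3615:/volume1/brick   # distributed (non-replicated) volume
gluster volume start bigvol

# A client (or a third "front" box that re-exports via SMB) mounts the combined volume:
mount -t glusterfs ds3617:/bigvol /mnt/bigvol
```

With this layout there is no separate "master": any pool member can serve the mount, and the front box is just whichever machine re-exports the share to clients.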
  9. Any updates on the current status of DSM 6.2 with regard to >24 drives? I saw a video where a guy successfully made a >48-drive SHR array or something like that.
  10. This would be awesome for mine that's sitting collecting dust (even the forum was taken down by Seagate), but I don't think it works, since it's not x86. The last thing I read on the BlackArmor forums was someone removing the BlackArmor web UI and running everything from the command line, with RAID provided by native Linux mdadm or something - increasing the speed quite drastically.
  11. So does this chipset support that, then? Hey, I don't mind hooking up just one SATA drive to each available PCIe x1 port. That's like 18 additional SATA ports.
  12. Physically it might be difficult, but even if it works with 1/3 of the PCIe ports, it'll still be awesome, I think.
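A back-of-envelope check on the one-drive-per-x1-slot idea (the figures are approximate assumptions: a PCIe 2.0 x1 link gives roughly 500 MB/s usable, and a modern HDD sustains around 250 MB/s sequential):

```shell
# Rough integer math: how many full-speed HDDs one PCIe 2.0 x1 link can feed.
LINK_MBPS=500    # approx usable bandwidth of a PCIe 2.0 x1 link (assumption)
HDD_MBPS=250     # approx sustained sequential speed of a modern HDD (assumption)
echo $(( LINK_MBPS / HDD_MBPS ))   # prints 2: even two drives per link before the slot bottlenecks
```

So one spinning disk per x1 slot leaves bandwidth to spare; the practical limits are physical fit and how many of those slots the board can actually enumerate at once.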
  13. Well, that's strange - it works with my current setup, though, a Z97 box. Anyway, this is random China stuff with no documentation and no "brand", so I don't know where the proper site would be. Although I remember seeing a video on YouTube of someone using a bunch of these for FreeNAS, and it worked fine. If you search for SU-SA3014, there'll be like a billion vendors with this model.
  14. How would we know if this is a port multiplier? And why is it a bad idea? Sorry, I'm new here.
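One way to check from the command line whether a card is a port multiplier (a sketch; the exact log wording varies by kernel version): Linux's libata layer logs PMP attachment at boot, and lspci identifies the actual controller chip.

```shell
# libata logs port-multiplier (PMP) attachment at boot; no matches suggests
# no multiplier is present (or the boot messages have rotated out of the buffer).
dmesg 2>/dev/null | grep -iE 'pmp|port multiplier' || echo "no port-multiplier messages found"
# lspci -k lists the SATA controller chip and its driver, which helps tell a
# real multi-port controller from several drives multiplied behind one port.
```

The usual objection to multipliers is that all drives behind one share that single port's bandwidth and, on cheaper chips, a hung drive can stall the whole chain.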
  15. I suppose my problem is a bit unique. There are no >2U servers where I live. I've been searching for years trying to find 4U servers, let alone a 4U server that's full of HDD trays at the front. You can. However, this is a spare, unused mobo and CPU that I have lying around. I reckon the mobo might be cheap now that GPU mining isn't like it was in 2017. Mind you, *IF* this is possible, you're looking at a cheap board plus a bunch of 4-port SATA cards that can hold a (theoretical) bunch of AT LEAST 80 drives. That's bigger (and cheaper) than any other solution, I think. Just wanna see if anybody has had any experience with this board before I start. I have no DDR4 RAM at the moment, so I'm just collecting info and ideas. But thanks for replying!