About C-Fu

  1. Because of the 24/26-disk hard limit of Synology/XPEnology, I was trying to figure out whether there is a way of combining multiple XPEnology rigs to create one combined volume. Example:
     DS3617: 1TB /volume1/FolderA/SubFolderA,B,C
     DS3615: 2TB /volume1/FolderA/SubFolderC,D,E and 3TB /volume1/FolderB/SubFolderX,Y,Z
     Result ("DS36157"): 3TB /volume1/FolderA/SubFolderA,B,C,D,E and 3TB /volume1/FolderB/SubFolderX,Y,Z
     Is there a way to do this? I'm currently reading up on GlusterFS and LizardFS, and they seem like the right idea/the future. Clients would only see shares from "DS36157". I read up on High Availability, but that seems to be more about having a redundant/failover XPEnology system. Also, how would I design the whole thing? Two XPE boxes and one XPE "master" or something? Cheers and thanks!
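For what it's worth, the GlusterFS idea in the post above could be sketched roughly as below. This is a hedged sketch, not a tested XPEnology recipe: the hostnames (`ds3615`, `ds3617`), brick paths, and the volume name `combined` are assumptions for illustration, and a plain *distributed* volume pools capacity without redundancy (losing one box loses the files stored on its brick).

```shell
# On ds3617, join the second box to the trusted storage pool:
gluster peer probe ds3615

# Create a distributed volume that concatenates the two bricks'
# capacity into one namespace (no replication):
gluster volume create combined transport tcp \
    ds3617:/volume1/brick1 ds3615:/volume1/brick1
gluster volume start combined

# Any client (or a third "master" box re-exporting via SMB/NFS)
# mounts the pooled namespace:
mount -t glusterfs ds3617:/combined /mnt/combined
```

A common design matching the question would be the two storage boxes as Gluster peers, with the "master" simply being whichever machine mounts the volume and re-shares it to clients.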
  2. Any updates on the current status of DSM 6.2 with regard to >24 drives? I saw a video where a guy successfully made a >48-drive SHR array or something like that.
  3. This would be awesome for mine that's sitting collecting dust (even the forum was taken down by Seagate), but I don't think it works, since it's not x86. The last I read on the BlackArmor forums was someone removing the BlackArmor web UI and taking everything command-line, with RAID provided by native Linux mdadm or something, increasing the speed quite drastically.
  4. So does this chipset support that, then? Hey, I don't mind hooking up just one SATA drive to each of the available PCIe x1 ports. That's like 18 additional SATA ports.
  5. Physically it might be difficult, but even if it works with only a third of the PCIe ports, it'll still be awesome, I think.
  6. Well, that's strange; it works with my current setup, though, a Z97 box. Anyway, this is random Chinese hardware with no documentation and no "brand", so I don't know where the proper site would be. Although I remember seeing a video on YouTube of someone using a bunch of these for FreeNAS, and it worked fine. If you search for SU-SA3014, there'll be like a billion vendors with this model.
  7. How would we know if this is a port multiplier? And why is it a bad idea? Sorry, new here.
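One rough way to answer the "how would we know" question, assuming the card is installed in a Linux box: SATA port multipliers are logged as "PMP" by the kernel's libata driver, and `lspci` reveals the actual controller chip on the card (a 4-port card built on a 2-port chip is almost certainly using a multiplier). This is a generic diagnostic sketch, not specific to the SU-SA3014.

```shell
# Look for port-multiplier (PMP) messages from libata:
dmesg 2>/dev/null | grep -i 'pmp' || echo "no PMP messages in dmesg"

# Identify the SATA controller chips actually present:
lspci 2>/dev/null | grep -i 'sata' || echo "lspci not available"
```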
  8. I suppose my problem is a bit unique. There are no >2U servers where I live; I've been searching for years trying to find 4U servers, let alone a 4U server that's full of HDD trays at the front. You can. However, this is a spare, unused mobo and CPU that I have lying around. I reckon the mobo might be cheap now that GPU mining isn't like it was in 2017. Mind you, *IF* this is possible, you're looking at a cheap board plus a bunch of 4-port SATA cards that can hold a (theoretical) bunch of AT LEAST 80 drives. That's bigger (and cheaper) than any other solution, I think. I just want to see if anybody has had any experience with this board before I start. I have no DDR4 RAM at the moment, so I'm just collecting info and ideas for now. But thanks for replying!
  9. Hmmm... would this even work, with 18x 4-port SATA cards? Why? Well... why not! 😂 I don't have spare DDR4 RAM to test it out at the moment; I just found this board in my storage cabinet (it's a board from the mining glory days 🤣): the B250 Mining Expert. Imagine a consumer board that can take at least 18x4 = 72 HDDs... The motherboard in question: https://www.asus.com/my/Motherboards/B250-MINING-EXPERT/specifications/
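The back-of-envelope math from the two posts above can be written out like this. The 18 PCIe x1 slots come from the B250 Mining Expert spec page linked above; the onboard SATA count is an assumption (typical B250 boards have around six), so treat the totals as theoretical.

```shell
pcie_slots=18        # per the ASUS spec page
ports_per_card=4     # one cheap 4-port SATA card per slot
onboard_sata=6       # assumption: typical B250 onboard ports

echo $(( pcie_slots * ports_per_card ))                 # drives via add-in cards
echo $(( pcie_slots * ports_per_card + onboard_sata ))  # theoretical total
```

That gives 72 drives via the cards alone, and the "at least 80" figure in the earlier post only works if some cards carry more than four ports or use port multipliers.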
  10. The setting is locked on mine, and I don't know why. I used other Windows 10 ISOs; same issue.
  11. As I understand it, changing the resolution while viewing the VM in the browser (via the Connect button in VMM) doesn't work for a Windows 10 VM. However, after trying out the new Windows 10 for Remote Session SKU, I can confirm that it works (accidentally!) right out of the box!
  12. That's awesome! I'm planning to get a Ryzen myself, so this is pretty comforting. But can you fully utilize all the cores in VMM? Since DSM sees the machine as having only 2 cores.
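A quick way to settle the core-count question empirically: run the commands below inside the guest (or in an SSH session on DSM itself) to see how many CPUs the OS actually reports. If DSM or VMM caps the VM at 2 vCPUs, these will print 2 regardless of how many physical Ryzen cores the host has.

```shell
# Number of processing units available to this OS:
nproc

# Cross-check against the kernel's CPU table:
grep -c '^processor' /proc/cpuinfo
```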