XPEnology Community

AllGamer


Posts posted by AllGamer

  1. If I read you right, it sounds like the drive you originally took out still sort of works, but the other 2 drives you had in there crashed one after the other while the new drive that replaced it was still syncing up (repairing) into the original RAID... which unfortunately failed to complete because those other drives died, leaving you with a broken RAID.

     

    I'm not sure you can recover from that. Perhaps some of the other guys who frequent this forum have ideas that can help, but personally I'm out of ideas.

  2. No worries, thank you for the lecture.

     

    I know I'm no expert in this subject; it's just the general impression you get when working with iSCSI on the NAS devices that have been available in the market, which, as you can tell, has not been pretty due to the poor speed compared to a regular NFS / CIFS share.

     

    I've seen complaints about iSCSI even from people on 10 Gbps networks, not just in this forum but elsewhere as well, for other NAS devices.

     

    It appears, as you said, that only authentic devices specifically built for iSCSI can take advantage of it and provide the performance it was supposed to offer; everything else feels like a poor man's version of iSCSI.

  3. For future planning:

     

    Set up 2 servers to mirror each other if you value your data, especially if you use RAID5.

     

    Best practice: never go with RAID5; use RAID6 as the bare minimum.

    RAID10 is the safest, fastest, and relatively "low cost" compared to RAID50 or RAID60 setups.

     

    Even for my hobby home setup, I use a minimum of RAID10 on each mirror server.

    so if a few HDDs die, it's not a big deal

    if the whole server dies, it's not a big deal

    but if both servers decide to die at the same time... well, I'm royally screwed.

     

    I've had 3 HDDs die on me from the same RAID volume; I'm glad it was RAID10.

    they just decided to expire on the same date. They were purchased in a batch of 12, and the remaining 11 drives continue to work fine today, many years after the warranty ran out.

     

    it's the luck of the draw; once you are out of the warranty period, expect HDDs to die randomly.

     

    I always keep a few on hand as spares to hot swap when a disk dies.

     

    That's why that day when 3 died suddenly it caught me off guard; if it was a RAID5 or RAID6 I'd have lost the data.

     

    Actually, since that day I added a 2nd server to mirror the 1st server, both running RAID10.

     

    You never know when a weird stroke of luck will hit you with a multi-drive death that can paralyze even a RAID10 :razz:
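    The capacity / safety trade-off between those RAID levels is easy to put in numbers. This is a rough, hypothetical sketch (nothing DSM-specific), assuming identical disks and simple two-way mirrors for RAID10:

```python
# Rough usable-capacity / fault-tolerance comparison for common RAID levels.
# n = number of identical disks, size = capacity per disk (e.g. in TB).
# RAID10 here assumes simple two-way mirrors (n must be even).

def raid_usable(level: str, n: int, size: float) -> float:
    """Usable capacity, in the same unit as `size`."""
    if level == "raid5":
        return (n - 1) * size   # one disk's worth of parity
    if level == "raid6":
        return (n - 2) * size   # two disks' worth of parity
    if level == "raid10":
        return (n // 2) * size  # half the disks hold mirror copies
    raise ValueError(level)

def guaranteed_failures(level: str) -> int:
    """Worst-case number of simultaneous disk failures survived."""
    return {"raid5": 1, "raid6": 2, "raid10": 1}[level]
```

    Note that RAID10 only *guarantees* surviving one failure; losing 3 drives from one RAID10 volume only works out when the dead drives happen to land on different mirror pairs, which is why the second mirror server is still worth having.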

  4. 5 disks? I'll assume that was a RAID5 volume.

     

    does the OLD drive that you replaced still "sort of" work?

    "sort of" work means it still spins up and can read partial data; it'll probably have a lot of bad sectors, but at least something can still be read from it.

     

    if the old drive is dying but not completely dead,

    the best way is to use Clonezilla and clone the dying drive to an image, or directly to another HDD, then put that HDD into the RAID5.

     

    then you have a better chance at recovering the RAID, since it's as if you plugged the "original" replaced drive back in.

  5. patching the DSM OS is not such a simple thing

    while it's a Linux distribution, it's not your typical Linux distribution

    everything needs to be compiled from the ground up to make it work with DSM.

     

    and even if you manage to patch NFS 4 to 4.1, the new features you are looking for might not be usable from the user interface; you could always do it from the command line, I guess.

     

    that being said, if you want iSCSI to be fast, you'll need a faster network, something along the lines of a 10 Gbps network card and fiber cables on both ends.

     

    but if you are going to spend that kind of money on a 10 Gbps network, you might as well use the same money to slap a proper RAID card + HDDs into the machine where you want iSCSI, mainly because when iSCSI is enabled it locks out a big chunk of the storage space just for the iSCSI file (which is the size of the drive you chose on the OS side).

     

    and depending on the implementation, iSCSI might not be shareable concurrently with other machines.

     

    it's not a true SAN, it's a fake iSCSI; all it does is create a virtual drive inside a huge file, like 500 GB or whatever size you choose for your iSCSI device when entering the size in the Control Panel.

     

    I know WD, QNAP, Synology and most other NAS vendors do it the same way; only true SAN servers that cost upwards of 10K give you true iSCSI that can be shared among multiple machines.

     

    For the same reason, as you mentioned, I just stick with NFS and Samba shares; they work much better at any network speed.

  6. yup, that's what I do when I'm troubleshooting controller / missing HDD issues.

     

    I first disable both USB and eSATA to make sure everything is working, before adding the USB back into the config file.

     

    It's a lot less headache when figuring out why some disks are not being seen.

     

    most of the time it's the onboard SATA / IDE ports on the motherboard playing tricks.

     

    I've noticed that most of the time, even when you "Disable" the onboard SATA / IDE ports in the BIOS, they are still seen by XPEnology DSM, and that can cause weird things.

    So I've learned it's best to always leave them "Enabled" in the BIOS, then let XPE DSM sort it out after boot.

     

    That means, using the example above, if you planned to use 25 disks (25 SATA ports) and your controllers give you exactly 25 ports, you will have to add the extra SATA / IDE ports from your motherboard; assuming the motherboard has 6 SATA + 2 IDE, your actual number should be 31 or 33 drives, depending on how the IDE ports are treated.

     

    IDE ports are recognized differently from motherboard to motherboard; sometimes they are counted as part of the SATA ports, sometimes properly identified as PATA.

  7. It's not that hard

     

    just edit the synoinfo.conf to include 58 disks.

    some tips worth keeping in mind for safety and easy troubleshooting:

     

    keep volumes to the number of disks per controller, normally 8 disks per SAS/SATA controller,

    unless you purchased the really expensive controllers worth $1000 that can easily do 16 or 24 disks per controller.

     

    if it really comes to that, I'd suggest setting up a small volume of 12 disks first, since that is the default with XPEnology.

     

    once booted up, you can edit synoinfo.conf to allow more disks; here is a nice reference guide for the DS214play, which defaults to 2 disks:

    viewtopic.php?f=15&t=15305

     

    the same "best practice" works well for the 64-bit version of XPEnology.
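    For reference, the disk-count settings in synoinfo.conf are hex bitmasks: one bit per drive slot, lowest bit first. Based on community write-ups, going to 58 disks means setting maxdisks and giving internalportcfg a mask with the low 58 bits set, keeping the usbportcfg / esataportcfg masks non-overlapping above it. A tiny helper to compute those masks (always back up synoinfo.conf before editing):

```python
# Compute the hex bitmasks used by synoinfo.conf port settings: each bit
# represents one drive slot, starting from the lowest bit.

def port_mask(num_disks: int, offset: int = 0) -> str:
    """Hex mask with `num_disks` bits set, starting at bit `offset`."""
    return hex(((1 << num_disks) - 1) << offset)

# e.g. for 58 internal disks with 4 USB slots mapped above them:
#   internalportcfg = port_mask(58)     (58 low bits set)
#   usbportcfg      = port_mask(4, 58)  (4 bits above the internal slots)
```

    The DS214play default of 2 disks corresponds to a mask of 0x3, which is a handy sanity check before trying bigger numbers.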

  8. yup, I found the same issue as well when I was playing with the DS214play version.

     

    you should be able to edit the synoinfo.conf file to add another card in there.

     

    if your machine can run 64-bit, you might be better off running XPEnoboot 5.2 for DS3615xs, which by default supports 4 NICs.

  9. personally, I always think it's a dumb idea to use riser cards, especially when you are down-speeding something as important as a SATA controller.

     

    There is a reason it was designed as PCIe x8; downgrading it to PCIe x1 loses far too many communication links, and beyond the data lanes, some of the power pins the card needs may only exist in the x8 portion of the connector, which is probably why it's not working.

     

    I know from experience those LSI 9211 cards can work in PCIe x4 mode (physical PCIe x8 / x16 slots)

     

    but going from x8 to x1 is probably too much to ask :razz:

    like running a marathon with only 1 leg
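    To put rough numbers on that marathon: a PCIe 2.0 lane runs at 5 GT/s with 8b/10b encoding, which works out to about 500 MB/s of usable bandwidth per lane, per direction. A back-of-the-envelope sketch (the per-lane figure is the standard PCIe 2.0 approximation, not anything measured on this card):

```python
# Why x8 -> x1 hurts: approximate usable PCIe 2.0 bandwidth per link width.
# 5 GT/s per lane * (8/10 encoding efficiency) = 4 Gbit/s ~= 500 MB/s/lane.

MB_PER_LANE_PCIE2 = 500  # approx. usable MB/s per PCIe 2.0 lane

def pcie2_bandwidth_mb(lanes: int) -> int:
    """Approximate usable one-direction bandwidth of a PCIe 2.0 link, MB/s."""
    return lanes * MB_PER_LANE_PCIE2
```

    So an x8 HBA has roughly 4000 MB/s to spread across its drives, while squeezed into x1 the whole controller shares one ~500 MB/s lane, about what a handful of spinning disks can already saturate.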

  10. I think one way they could sell a PC-compatible DSM for DIY use and still keep up demand for their own hardware NAS is to always have the PC DSM be a version behind the current DSM. So DSM 6 is out now; they would sell DSM 5 for PC. They could also have the base DSM 5, and if you want the advanced features like Surveillance Station, that would be an extra cost.

     

    but... it's already exactly like this right now. :shock::-|

  11. As far as I'm concerned, if you are using newer software, the alignment is done automatically behind the scenes.

     

    parted, ever since Advanced Format drives came out, has defaulted to megabyte alignment instead of the physical (cylinder) alignment it used before those drives.

     

    Aligning the partition / filesystem to the megabyte automatically fits it to the proper cluster size.
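    The reason megabyte alignment "just works": if a partition starts on a 1 MiB boundary, it is automatically aligned to any physical sector or stripe size that divides 1 MiB (512 B, 4 KiB Advanced Format sectors, 64 KiB RAID stripes, ...). A quick check, assuming 512-byte logical sectors:

```python
# Check whether a partition start (given as a logical sector number) lands
# on a 1 MiB boundary, which also aligns it to 4 KiB Advanced Format
# sectors and common RAID stripe sizes, since all of them divide 1 MiB.

SECTOR = 512          # logical sector size in bytes
MIB = 1024 * 1024

def is_mib_aligned(start_sector: int) -> bool:
    return (start_sector * SECTOR) % MIB == 0
```

    Old partitioning tools started the first partition at sector 63 (misaligned on 4K drives); modern parted/fdisk default to sector 2048, which is exactly 1 MiB.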

  12. They know what the license says, but they also know we can't do anything about it; only the copyright holder can. They should really declare war on Synology, because Synology has been intentionally breaking the license for years while making a huge profit.

     

    The same goes for just about any Chinese company that uses Android as well. None of them follow the rules...

     

    yup! Synology is actually a Chinese (Taiwan) company according to its registration; then we have ASUS, Acer, Gigabyte, MSI, Razer, Logitech, HTC and... well, pretty much all the big brand names in the PC and Android market are from mainland China or Taiwan, and it's true they do not follow the GPL open source requirements.

     

    It's very hard to get them to cooperate (follow the rules).

     

    Samsung (South Korean) follows the rules, but they are always behind schedule; you have to ask them many times before they release the source code for their Android devices.

     

    QNAP is also headquartered in Taiwan (at least QNAP has a proper GPL source code page like Samsung).

  13. There is a simpler solution to what you want to accomplish.

     

    Assuming you have only 2 USB ports in that computer running XPEnology:

     

    you can use 1 of the 2 ports for the XPE boot stick,

    and then plug a 4-port (or more) USB hub into the 2nd USB port; now you get 4 ports to connect any USB storage you want to add.

    something like this, https://www.amazon.ca/Plugable-10-Port- ... B00483WRZ6

    pick one like that with the number of ports you plan to use.

     

    regarding the PCIe/mSATA-to-USB converter, I do not know which chipset will work for sure; check the hardware thread to see if anyone has previous experience with the chipset you plan to buy. viewtopic.php?f=2&t=10267

     

    There are lots of other options, like going eSATA or USB 3.1, etc... but I'm under the impression you are trying to set up a low-cost solution.

  14. What's impossible for Synology is the fact that there are thousands of possible hardware combinations (board, chipset, controllers, NICs, etc.). It won't be easy for them to support them all. But what they could do is offer support for "qualified" hardware. Several big software manufacturers work this way: if you don't use the "qualified" hardware, you won't get any support even though you have a support contract that costs hundreds of euros or dollars (SolidWorks is one of these).

     

    VMware is another great example: free and fully featured, but when you want HA and other high-end features you need to buy a license.

     

    Also, they only support hardware on their "White Box" list; if it's not supported by them, then you are on your own.

     

    most small businesses, SOHO users, and enthusiasts run white-box VMware with ESXi, and VMware still makes big profits selling to large companies and data centers.

     

    Synology just needs to partner up with hardware suppliers like Lenovo, HP, Dell, LSI, Adaptec, etc...

  15. If it is plug and play with any hardware, I would pay around $50-$75. Depends on support.

     

    yup, that's exactly what I was thinking as well in my previous reply when I said no more than $100,

    but in reality I was thinking between $50 and $75, with $100 being the absolute maximum if they are greedy.
