XPEnology Community

Ben0

Transition Member
  • Posts: 5
  • Joined
  • Last visited
Ben0's Achievements: Newbie (1/7)

Reputation: 0
  1. Bump, I came back to see if there's a solution yet. I still don't understand what exfiltrate and I are doing wrong. It seems that as soon as the system partition is replicated to the new disks, the .conf files get reset after reboot. Perhaps (only speculating) the .conf files aren't replicated to the system partitions on the other disks (the first sketch after these posts shows one way to check this), so on reboot DSM detects the inconsistency and either reverts to the original files or thinks it's a new, unrecognized installation. From what I understand, the system partition is a RAID 1 across all disks. I don't think this will work, but when I get time (or if someone else can try it first) I will edit the files, expand the volume, then edit the files again. Or maybe there is some way to disable the system-file check, or whatever it does during boot. I'm clutching at straws here, hoping it might spark an idea for someone else.
  2. I know, you are right. I had RDM working fine for about two years until I tried 15 drives in one DSM. In a previous post Kanedo said they had success with 13 drives when passing through the whole PCIe controller rather than using RDM. I've given up on having more than 12 drives after reading through pages and pages of threads. It seems some people can get it to work somehow and others can't; it doesn't seem very reliable in any case. Especially since there will eventually be an XPEnology DSM 6.0, and I'll have to re-install anyway since it uses a different file system. Cheers
  3. So I can't pass through the controller because my hardware doesn't support VT-d, so I've just gone the bare-metal approach. This didn't work in the past because my NIC wasn't supported (Dell 2950), but it seems to work now in the latest release. This is what happens:
     1. Power up; it says the disks were either moved from another DiskStation or it has been reset and needs to re-install.
     2. Re-install, boot, only 12 drives (new install), modify both synoinfo.conf files (the second sketch after these posts shows the kind of edits I mean).
     3. Reboot; it tries, then crashes, then restarts again.
     4. Back to step 1.
     At step 2, after I modify the synoinfo.conf files, if I don't reboot and instead go into SSH:
     1. mdadm -Asf
     2. vgchange -ay
     3. mount /dev/vg1000/lv /volume1
     4. All files are still there.
     5. But DSM shows the drives as green - spare.
     The data volume RAID is intact, but no matter what I do I can't get DSM to recognize it, because the synoinfo.conf files keep resetting and only show 12 disks. I could get an 8TB external drive, back everything up and start again, but they cost $420 here and I want to avoid parting with that much money unless I really have to. I'm now going to try having only 8 drives, install DSM, modify synoinfo.conf, hot-plug the other 7 and see if it recognizes the extra disks. What I really need is a boot image and .pat file with synoinfo.conf already modified. I looked into it but was unable to re-pack the zImage kernel file. I don't know if this makes any sense, but I appreciate the feedback. Thanks
     EDIT: I hot-plugged the drives, then did:
     mdadm --stop /dev/md(x)
     mdadm -Asf
     vgchange -ay
     mount /dev/vg1000/lv /volume1
     DSM now sees the volume, though it doesn't show in File Station and still says crashed. Under disk info it says "System Partition Crashed", which I would expect. I know as soon as I reboot it's going to revert back to 12 drives.
     EDIT EDIT: If I create new shared folders with the same names, the same data is in there. For example, if I create "Photos", all my photos are in there. When I use "Repair" in Storage Manager to repair the system partition, it pops up a blank message and nothing happens.
     EDIT EDIT EDIT: Rebooted, and guess what... yes, back to step 1.
  4. AllGamer, I'll try that and post results soon. BTW, I'm using XPEnoboot_DS3615xs_5.2-5644.4.vmdk. Cheers
  5. G'day. I don't have a solution, but I'm also experiencing the same issue as the OP. My setup is as follows: ESXi v6.0, 15 various-sized physical drives mapped to the VM (no power supply issues).
     1. Originally had 2 separate DSMs inside ESXi, both with fewer than 12 drives.
     2. This worked with no issues whatsoever for a couple of years.
     3. Ran out of space, so I decided to combine the two DSMs.
     4. Modified synoinfo.conf; this worked no worries - 45 total slots.
     5. Added 1 more drive (still fewer than 12); it crashed.
     6. Realized it's because I didn't format the extra drive.
     7. Formatted it, added it, works fine.
     8. Added the rest of the drives for a total of 15; works while the drives are blank.
     9. Expanded the volume; all still working.
     10. After a reset, it crashes, the extra (12+) drives disappear and synoinfo.conf resets.
     11. Every time I attempt to modify synoinfo.conf, it resets after reboot.
     Conclusion:
     - More than 12 drives is OK if they're blank.
     - As soon as there is a Synology partition on the extra drives, it crashes.
     - From what everyone else is saying, this doesn't happen on bare metal or if you pass through the whole PCI storage adaptor.
     It seems DSM sees the drives as foreign, or in a different order, when in a virtual environment - I don't know? The question is, where is the original synoinfo.conf coming from, and can we modify the original? Is it on the boot image? reclaime.com finds the RAID and shows all the data is still there (the third sketch after these posts shows how to confirm that from SSH as well). I'm going to try a PCIe passthrough, same as Kanedo. Hope this info was somewhat useful to someone.
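
A note on post 1: DSM keeps its system partition as a small RAID 1 (normally /dev/md0) spanning every disk, and the configuration exists in two copies, /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, with the /etc.defaults copy being what DSM tends to restore from. A minimal sketch of the consistency checks speculated about above, assuming SSH root access on DSM 5.x; the device name md0 and the paths are the usual ones but may differ on other builds:

    # Show the system partition RAID 1 and which disks are members of it
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Compare the running config with the defaults copy DSM can fall back to
    diff /etc/synoinfo.conf /etc.defaults/synoinfo.conf

    # Look at the disk-count keys in both copies
    grep -E 'maxdisks|internalportcfg' /etc/synoinfo.conf /etc.defaults/synoinfo.conf

If the two copies disagree, or md0 shows missing members, that would line up with the theory that the edited files never make it onto the other disks' system partitions.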
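
A note on posts 3 and 5: the >12-drive tweak that keeps getting reverted is normally made in both copies of synoinfo.conf, raising maxdisks and widening the internalportcfg bitmask (one bit per internal slot). A rough sketch for 15 internal drives follows; the key names are in a stock synoinfo.conf, but the exact hex values are an assumption to check against your own file (the esata and usb port masks must not overlap the new internal mask):

    # Edit BOTH copies; editing only /etc/synoinfo.conf is not enough,
    # since DSM can restore it from /etc.defaults on reboot or repair
    for f in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
        sed -i 's/^maxdisks=.*/maxdisks="15"/' "$f"
        # 15 internal slots -> 15 low bits set -> 0x7fff
        sed -i 's/^internalportcfg=.*/internalportcfg="0x7fff"/' "$f"
    done

    # Sanity check the port-related keys afterwards
    grep -E 'maxdisks|internalportcfg|esataportcfg|usbportcfg' /etc/synoinfo.conf

Even with both copies edited, the posts above suggest DSM 5.2 can still rewrite them during a system partition repair, which is why baking the change into the boot image / .pat file (as suggested in post 3) keeps coming up.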
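
A note on post 5: the data volume can also be checked from the DSM shell rather than with reclaime.com. A sketch, assuming the usual layout where the data array lives on partition 3 of each disk and the volume is the LVM logical volume /dev/vg1000/lv quoted in post 3 (the /dev/sdb3 below is only an example device):

    # List every md array the kernel currently sees
    cat /proc/mdstat

    # Inspect the RAID superblock on one of the data partitions
    mdadm --examine /dev/sdb3

    # Reassemble whatever can be assembled, then bring up LVM
    mdadm -Asf
    vgchange -ay
    vgdisplay

    # Mount read-only first so a half-recognized array can't be written to
    mount -o ro /dev/vg1000/lv /volume1

If this mounts and the files are visible, the array and LVM metadata are intact, and the remaining problem is only that DSM's own config (synoinfo.conf) refuses to acknowledge more than 12 slots.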