XPEnology Community



George's Achievements

Community Answers

  1. 1: MB: Gigabyte GA-B250M-D3H. 2: Linux/Debian, to be specific. 3: D@mmmm hot. At the moment xPenology lets me configure a "suspend" time and then somehow injects a "wakeup" time, which I assume is actually done through generic/industry-standard calls to the BIOS (I wouldn't assume xPenology has HW-specific calls for every MB). When xPenology was doing this I can confirm my machine was in suspend mode; it was "dead", no fan or anything moving or turning, and it powered itself back up at the scheduled time. G
  2. Guys, with xPenology under Power Management I'm able to push the NAS into a suspend/sleep mode and have it wake up again at a pre-determined time. Any chance anyone has access to the code that does this? How is it accomplished, I mean the actual talking to the HW? I need to replicate it on another machine. G
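I don't have DSM's source either, but on a stock Linux box the generic, motherboard-agnostic mechanism for scheduled wake is the kernel's RTC alarm, exposed at `/sys/class/rtc/rtc0/wakealarm` (or via `rtcwake` from util-linux). It's plausible that's the "industry standard call" being made, though that's an assumption. A minimal sketch, with the hardware-touching lines commented out because they need root and a real RTC:

```shell
# Compute a wake-up time 8 hours from now as a Unix epoch (GNU date).
WAKE_EPOCH=$(date -d '+8 hours' +%s)
echo "RTC alarm would be set for: $(date -d "@$WAKE_EPOCH")"

# The actual suspend/wake calls (root + real RTC required, so commented out):
# echo 0             > /sys/class/rtc/rtc0/wakealarm   # clear any stale alarm
# echo "$WAKE_EPOCH" > /sys/class/rtc/rtc0/wakealarm   # arm the RTC alarm
# echo mem           > /sys/power/state                # suspend to RAM

# Or both steps in one go with util-linux:
# rtcwake -m mem -s 28800    # suspend now, wake in 8 hours (28800 s)
```

If the machine powers back up at the scheduled time, the BIOS honoured the RTC alarm, which matches the "dead, then wakes itself" behaviour described above.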
  3. Hi all, busy with a serious rebuild of my NAS; as some might have noticed, things went seriously south. My Docker repository sits on Volume Group 1 (I run the UniFi Controller as a Docker deployment, that's probably the most important one; newer is Plex Server, which is just an install, but I need to figure out how to back up its indexed files as well, in case I need to totally wipe/rebuild). Looking at everything that needs to be done: how can I back up my containers and offload the backups to an off-system location, so that if I need to, they can be restored after a complete rebuild, which might include a loader upgrade and a completely new DSM build? Haven't done this before. G
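One approach, sketched below: the parts worth saving are the image and the persistent data; the container itself can be recreated. This uses the standard `docker save` / `docker load` CLI plus tar, but the image name and data path are hypothetical placeholders, and the block skips cleanly where docker isn't installed:

```shell
# Back up a Docker deployment to tar files you can copy off-system.
# Image name and data path below are hypothetical examples.
BACKUP_DIR=$(mktemp -d)

if command -v docker >/dev/null 2>&1; then
    # 1) Save the image (recreatable by re-pulling, but saves time offline):
    docker save -o "$BACKUP_DIR/unifi-image.tar" jacobalberty/unifi:latest
    # 2) Archive the container's persistent data (bind-mount or volume path):
    tar -czf "$BACKUP_DIR/unifi-data.tgz" -C /volume1/docker/unifi .
    # 3) Restore later with:
    #    docker load -i unifi-image.tar
    #    tar -xzf unifi-data.tgz -C /volume1/docker/unifi
else
    echo "docker not available here; commands shown for reference only"
fi
echo "backup staging dir: $BACKUP_DIR"
```

Copy the resulting tar files off the box (SMB, rsync, USB) and they survive a full wipe, loader upgrade, and fresh DSM install.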
  4. Did some re-reading after 2 cups of coffee; things are clearing up a bit, still hazy with my older 1.02b loader on the USB. I see how you say to move the HDDs and USB together, but where are my apps installed, and where is the configuration all saved? And then the 2 VGs: Volume 1 (which does contain data I don't want to lose), based on an M.2, and the 5 problematic drives in Volume 2, which it looks like I will destroy and rebuild. Is there a how-to for the loader / DSM upgrade without losing VG1 and my installed apps? G
  5. ... TMI ... If I can break it down for myself: if it's a compatible MB, as long as the drives go back into the same SATA slot numbers and I use the same USB, it should all just work? Changing the loader, well, that starts sounding like a complete rebuild then, and "probably" losing everything on my current setup, which is something I really don't want. But at the same time, now might be the best time to get the loader and DSM upgraded, while hoping not to lose anything on my Volume 1 VG. G 😅
  6. OK, just a crap-load of copying; it took me the good part of 4 days to get the data off. G
  7. "You can't repair a RAID5 array with less than the full complement of drives. You must have 5 drives to rebuild, period." This "must have 5 drives" would be a DSM-imposed minimum, as it's not a RAID5 technology limitation. With this 5-drive limit, it sort of puts a stop to my idea of re-creating the disks into two VGs: one as a mirror pair and the second as a 4-disk VG. OR... are you saying that because this VG was originally 5 drives, it now has to stay 5 drives, and I can't have it reconfigure/rebalance itself into a 4-drive RAID5 VG? OK, going to pull this drive out, zero the original drive, and see if I can get it to rebuild. G
  8. So if I click RAID Group, Manage, Repair, it only allows me to pick the one drive currently not being used; when that drive (used drive 4) is not there, the entire Manage and Repair option is greyed out. On Overview I can see Repair, and this is what is then shown, with no further options. The volume group was 5 drives; last time I hit Repair it crashed right after. It's as if I need to get the VG rebuilt with the 4 good drives, remove the 1 drive, and once healthy... then extend it by adding the 5th drive back. G
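For what it's worth, DSM's storage pools are ordinary Linux md arrays underneath, so even when the GUI greys out you can see the true array state from an SSH shell. A hedged sketch below: the read-only check is safe, while the repair command is commented out and the device names are examples only (yours will differ):

```shell
# Inspect the md RAID state that sits under a DSM volume (run via SSH as root).
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat          # shows each array, e.g. "md2 : active raid5 ..."
else
    echo "/proc/mdstat not readable here; run this on the NAS itself"
fi

# Detailed state of one array (device name md2 is an example):
# mdadm --detail /dev/md2    # look for "State : clean, degraded"

# Re-add a zeroed/replacement disk so the array rebuilds onto it
# (destructive to that disk; the partition number is an example):
# mdadm --manage /dev/md2 --add /dev/sdd5
```

If `mdadm --detail` shows the array degraded but active with 4 of 5 members, that matches the GUI's "repair with the missing 5th drive" behaviour.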
  9. Guys, please confirm: my understanding below is that drive 4 is unused, so I can remove it from the machine? The issue: my 14.54TB size was based on the 5-drive configuration, so how do I then tell DSM to rebuild based on the 4 drives? G
  10. ... Update ... So I've been able to copy all my data off the NAS. I can now either try to fix the volume group as-is/in-place by removing and re-adding the problem drive... or destroy it and rebuild the volume group into the 2 volume groups described above. Although, to do that properly I'm going to move the one VG onto a separate PCIe-based controller, which I need to order and which has a 2-3 day lead time. Thinking a 2-HDD group and a 4-HDD group; comments? G
  11. Thanks, thanks. I know RAID5 well. So in essence it's a SW-based RAID5 with this smallest-disk (per physical disk) usage, ending up creating multiple RAID5 groups, with the groups then added together into a volume group.

As for whether something else is maybe wrong, I've been wondering that too. My first thought is possibly the SATA ports. It was really misbehaving last night when I was trying to copy the photos: the volume would crash, I would restart the unit, it would come back degraded, and as soon as I started copying again it would crash. Curious, I clicked on the logs for the crashing drive and it was showing plug-in, plug-out, plug-in, plug-out... resulting in me taking the unit down from where it normally stands, opening it up and reseating all the cables. It's been running stable since then. First priority for now is to copy all the media off the unit; then I'll get into how to fix it.

Adding some new SATA controllers is low impact. I only have 2 volumes: Volume 1 is a cache volume located on an M.2, and Volume 2 is the problem one, so if I'm rebuilding, it's easy to move that one. My Docker builds and Plex cache/library are all on V1, unaffected. If this goes deeper, like the MB... then I'm worried: how do I replace the MB and keep my build / my NAS configuration? This is where I stick my hand up and hope someone can assist.

Still wondering about the better drive configuration to go for. V2 at the moment was built with 5 x 4TB drives, with the 6th drive bay in waiting. Thinking of the future, I'd rather go with 2 RAID groups: one as a mirror (yes, that's really RAID1 out of 2 x 4TB) for documents/photos etc. as a new Volume 3, and then the remaining 3 drives (or 4 if I get another controller) reconfigured as RAID5 into Volume 2, so that all the programs still find the media where it's expected. Moving the photos/documents is a low-impact change that won't touch anyone; it's just SMB-shared out to one machine from where we copy onto the NAS. The media is shared via the Plex and iTunes servers, so I don't want to go reconfiguring that and have to re-index everything (indexes are saved on V1). I'm just talking... still nervous, only had 4 hours' sleep working on this, and that included fighting with mosquitoes and my son crawling into our bed at midnight. G
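On the "multiple RAID groups added together into a volume group" point: that layering (md RAID arrays underneath, LVM volume groups and logical volumes on top) can be inspected directly over SSH. A read-only sketch, guarded because the LVM tools need root and only give real answers on the NAS itself:

```shell
# See how DSM layers LVM volume groups on top of md RAID arrays.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat   # the md arrays (RAID1/RAID5 groups) built per disk slice
fi

if command -v pvs >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
    pvs                # which md devices act as LVM physical volumes
    vgs                # the volume groups built from them
    lvs                # the logical volumes (your Volume 1 / Volume 2)
else
    echo "LVM tools/root not available here; run these on the NAS over SSH"
fi
```

Seeing which md device backs which volume group also tells you which physical drives a given volume actually touches, useful when planning the mirror-plus-RAID5 split.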
  12. For my education: 2 x 6TB drives will give me 6TB usable; 3 x 4TB drives will give me 8TB, as one drive's worth is used for parity blocks in a round-robin fashion? 4 x 4TB will give me approx 12TB? 5 x 4TB approx 16TB? Thinking about the rebuild, to contain the blast radius if something happens again, I'd rather go with 2 volume groups: one for documents and photos, where the space required is approx 2TB, so 2 x 4TB will be good enough. At the moment that leaves me with 3 I/O slots into which I can fit 3 x 4TB drives; I'll need to get a PCIe card with additional ports, but I'm looking at doing a 3-5 drive volume. G
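The arithmetic above checks out: RAID5 usable space is (N - 1) times the smallest drive, and a 2-disk mirror gives one drive's worth. (The 14.54TB DSM reported earlier for 5 x 4TB is the same 16TB, just measured in binary TiB.) A quick sketch:

```shell
# Usable capacity in TB: RAID5 keeps (N - 1) drives' worth of data and
# one drive's worth of distributed parity; RAID1 keeps one drive's worth.
raid5_usable() { echo $(( ($1 - 1) * $2 )); }   # $1 = drive count, $2 = TB/drive
raid1_usable() { echo "$2"; }

echo "2 x 6TB RAID1: $(raid1_usable 2 6) TB"    # 6 TB
echo "3 x 4TB RAID5: $(raid5_usable 3 4) TB"    # 8 TB
echo "4 x 4TB RAID5: $(raid5_usable 4 4) TB"    # 12 TB
echo "5 x 4TB RAID5: $(raid5_usable 5 4) TB"    # 16 TB
```

So the 2-drive mirror comfortably covers the ~2TB of documents/photos, and a 3-5 drive RAID5 gives 8-16TB for media.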
  13. ... Took the system off the rack where it was, opened it up, reseated a lot of the cables; stable for now... still in a degraded state, but I'm not going to touch that now. Busy copying as much as I can, as fast as I can. G