XPEnology Community

Diverge
Member · 355 posts
Everything posted by Diverge

  1. Anyone ever see their read transfer speed just die for some reason? It just happened to me. I looked at ESXi performance and saw CPU was pretty high, but in DSM no processes were using much CPU. In this pic you can see the high VM CPU usage correlates with the drop in read transfer speed. Then the CPU drops off, and my read speed comes back to normal. All the drives are on an LSI 9201-8i, using gnoboot 10.2 w/ VT-d (passthrough). Drive 5 is set up as eSATA for backup purposes. edit: you'll also notice that while reads dropped off a ton, writes were still going at a decent rate (even though they dropped some too)... is data getting buffered or something?
  2. Slap in the face? No one owes you anything. If you have any expectations, buy a real Synology. Only then can you expect any sort of service, or have any right to complain.
  3. try playing with /etc.defaults/synoinfo.conf (internalportcfg= and esataportcfg=; internalportcfg + esataportcfg should = 0xfffff). You may try one by one to see which bit corresponds to the particular port you want to set as an eSATA port.
     Thanks, I'll have to play around to try to understand how it works on my test system (as soon as I put it back together). The values on my real system are as follows:
     usbportcfg="0xf00000" esataportcfg="0xff000" internalportcfg="0xfff"
     Does each bit equal a port? So in order to change the 5th port I'd change the values to this?
     usbportcfg="0xf00000" esataportcfg="0xff010" internalportcfg="0xfef"
     edit: Yay, it worked! At first it wasn't working (doing all tests with virtual disks), then I realized my 5th disk wasn't partitioned or formatted. After that it showed up in external devices.
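     A rough sketch of the bit math, assuming each bit maps to one drive port counting from bit 0 (so port 5 is bit 4), which is consistent with the values above:
       # port 5 = bit 4 = 0x10; move that bit from the internal mask to the esata mask
       printf '0x%x\n' $(( 0xfff & ~(1 << 4) ))    # 0xfef   -> new internalportcfg
       printf '0x%x\n' $(( 0xff000 | (1 << 4) ))   # 0xff010 -> new esataportcfg
       vi /etc.defaults/synoinfo.conf              # edit the two values, then reboot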
  4. eSATA question: Is it possible to designate a specific SATA port as an eSATA port? I have a 5-bay hard drive enclosure connected to an LSI 9201-8i in passthrough (ESXi 5.5 system). 4 drives are part of my DSM array, and I'd like to use the 5th slot as a backup slot if I can somehow get DSM to recognize it as eSATA.
  5. I don't have an answer on what to do for you, but I'm going to guess that DSM is saying your array is crashed because the volume group config doesn't match your array anymore. Again, not sure I know what I am talking about. But when I was expanding my array (swapping in bigger disks one by one) within DSM, and letting it do its thing, it messed up on the last disk and said my array was crashed. To fix it I had to manually restore a previous vg1000 config from /etc/lvm/archive/ that matched the setup of 3/4 disks when the system last worked correctly. Once I did that I booted into a degraded array, and could add my 4th disk to repair. But I dunno if you'd have the correct backup config since you did things manually from the shell, as I'm not sure what process creates the backups (DSM?). Best bet is to do what you're doing: back up your data and start fresh.
  6. Why not use another OS, like Linux? I'm currently backing up my data to a USB drive that I formatted ext4 in DSM (taking forever at 35MB/s). If I redo my Synology OS I plan to just create a new array, power down, move the array to a Linux VM (along with my backup drive), and then copy all the data to it, taking DSM out of the loop. The reason I am doing it this way is because when I copy my data from the backup drive, it will be connected via SATA rather than slow-ass USB. DSM won't just let you copy off another SATA drive unless it's eSATA, which is the reason I'll use Linux. This is a situation where having ESXi is priceless: with a few clicks you can move disks from one VM to another.
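     For reference, a minimal sketch of the Linux-side steps once the disks are attached to that VM (vg1000/lv is the volume name DSM uses; the mount points and the /dev/sdX1 backup-drive device are placeholders):
       # assemble the DSM data array and activate its LVM volume
       mdadm --assemble --scan
       vgchange -ay vg1000
       mkdir -p /mnt/dsm /mnt/backup
       mount /dev/vg1000/lv /mnt/dsm
       mount /dev/sdX1 /mnt/backup      # the ext4 backup drive; adjust the device name
       cp -a /mnt/backup/. /mnt/dsm/    # copy everything from the backup onto the new array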
  7. Also, does anyone else see orphaned inodes at boot? Is this normal? I seem to get 1 every boot.
  8. After recovering my volume today I did some testing with gnoboot_10.2 and VT-d passthrough of my LSI 9201-8i. I seem to be getting pretty decent read/write speeds just copying files from folder to folder in DSM. Pretty much ~150MB/s+ on both read and write at the same time. The first 3 peaks are 3 smaller files, then I found a 20GB movie to copy, which is the last peak.
  9. The shutdown script should be as follows: syno_poweroff_task -r, then poweroff. If you directly call poweroff, after reboot you'll get a notification for an improper shutdown.
     Interesting. I had been using poweroff whenever shutdown from DSM hung, and sometimes if I was just in the console and too lazy to open DSM. I never recall any improper shutdown notifications though.
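     As a two-line script, the sequence quoted above is simply (per the post above, syno_poweroff_task runs first so DSM doesn't flag an improper shutdown):
       #!/bin/sh
       # let DSM do its own shutdown handling before the actual power-off
       syno_poweroff_task -r
       poweroff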
  10. I'd also prefer it if gnoboot were that way. Having everything in the zImage makes it impossible for us to edit anything in rd.gz.
  11. After all day of playing around, I think I've figured it out, and I'm in the process of fixing my array of new disks that failed on the last disk swap (the data is on my old array, after fixing it... but DSM was complaining of missing inodes or something like that in the console, probably because data changed while in the process of swapping out disk by disk, so I didn't feel safe using it). In /etc/lvm/archive are copies of the LVM config at different stages of changes to the array. I had like 8-9 of them. After looking at them all and trying to narrow down the config that last worked before the volume crashed, I thought I found the correct one. While searching the internet I found this page on fixing LVM RAID arrays, and found the command I needed: http://www.novell.com/coolsolutions/appnote/19386.html vgcfgrestore. Except it wouldn't let me restore the files from the archive folder path, and it seemed like it was hardcoded to only work with the /etc/lvm/backup/ folder. So I renamed the config there from vg1000 to vg1000.old, and copied the config I thought might fix it to /etc/lvm/backup/, renaming it vg1000. Then I did: vgcfgrestore vg1000 and it said it was restored, or something like that. So I rebooted the system (leaving out the last disk in the array I swapped out/in), and it booted to a degraded array. I added the disk, and am in the process of a repair. edit: gnoboot, if you read this. Since my array was fucked (prior to my last boot), I figured I'd move to gnoboot 10.2 and try passing through my LSI 9201-8i card (was getting tired of deleting and adding RDM devices). So far while repairing my array I'm getting 103MB/s. edit2: now 118MB/s
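     Condensed, and hedged since the right archive file depends on your own change history, the recovery described above looks roughly like this:
       ls /etc/lvm/archive/                                  # pick the config from just before the volume crashed
       mv /etc/lvm/backup/vg1000 /etc/lvm/backup/vg1000.old
       cp /etc/lvm/archive/<chosen archive file> /etc/lvm/backup/vg1000
       vgcfgrestore vg1000
       reboot                                                # should come back up with a degraded array; then repair in DSM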
  12. Thanks. Looks like I'll be forced to that choice. After swapping out the last disk the volume shows as crashed in DSM with no option to repair. I did some digging, and all the md partitions are there, but the logical volume and volume group are missing. Put back the old disk: the same. Going to revert to my old array (hope it still works), back up all my data and start fresh. edit: looks like re-inserting the old disks has issues too. 1 disk was missing from the array; booted into DSM and it listed the volume as crashed, but gave a popup box saying the RAID was disassembled and wanted to run a scan at reboot... picked that option. Now just gotta wait and hope it fixes itself. For whatever reason, now only 1 disk is listed in md0. edit2: think I figured out why. Because it's booting off the last disk that was left from the new array, which now has the new array's volume group data on it. I removed that disk and booted. DSM still said it was crashed, but my volume was mounted in the terminal and I could see data... going to see if I can reverse the order of the disks and get it to boot from the first disk that was swapped out rather than the last. edit3: couldn't fix it from the DSM machine. Moved all 4 original disks to a Linux machine, it automatically booted with my array there, but 1 disk wasn't in the array for whatever reason. Added it back w/ mdadm /dev/md2 -a /dev/sdc5 and it added. Now it's doing a recovery... *crosses fingers* hope it works. If it works I'll back up the data off it, then dd the sdx1 partition from the first disk that was swapped out to the other 3 drives so they all have the same data there.
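     For anyone following along on the Linux side, the re-add boils down to something like this (the device names are from my setup; check mdstat for yours):
       cat /proc/mdstat               # md2 shows up with one member missing
       mdadm --detail /dev/md2        # confirm which device dropped out
       mdadm /dev/md2 -a /dev/sdc5    # re-add the missing member
       cat /proc/mdstat               # recovery progress shows up here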
  13. I've started the process of swapping out the disks one by one and repairing. I'm up to the 3rd disk now, but have a couple of questions. I noticed that I now have an md3, and that my new disks are physically partitioned to match the older 2TB disks, with the balance in a 4th partition that becomes md3. When the process is complete, will I always have my one volume split between 2 partitions per disk? Or will it resize at the end and leave me with just md0 (DSM), md1 (swap), and md2 (volume1)? sda, sdb, sdc are the new 3TB drives, sdd is an old 2TB:
     DiskStation> sfdisk -l
     /dev/sda1          256    4980735    4980480  fd
     /dev/sda2      4980736    9175039    4194304  fd
     /dev/sda5      9453280 3907015007 3897561728  fd
     /dev/sda6   3907031104 5860519007 1953487904  fd
     /dev/sdb1          256    4980735    4980480  fd
     /dev/sdb2      4980736    9175039    4194304  fd
     /dev/sdb5      9453280 3907015007 3897561728  fd
     /dev/sdb6   3907031104 5860519007 1953487904  fd
     /dev/sdc1          256    4980735    4980480  fd
     /dev/sdc2      4980736    9175039    4194304  fd
     /dev/sdc5      9453280 3907015007 3897561728  fd
     /dev/sdc6   3907031104 5860519007 1953487904  fd
     /dev/sdd1          256    4980735    4980480  fd
     /dev/sdd2      4980736    9175039    4194304  fd
     /dev/sdd3      9437184 3907015007 3897577824   f
     /dev/sdd5      9453280 3907015007 3897561728  fd
     /dev/md01            0    4980351    4980352   0
     /dev/md11            0    4194175    4194176   0
     Error: /dev/md2: unrecognised disk label
     get disk fail
     Error: /dev/md3: unrecognised disk label
     get disk fail
     DiskStation>
     DiskStation> pvdisplay
       --- Physical volume ---
       PV Name               /dev/md2
       VG Name               vg1000
       PV Size               5.44 TB / not usable 3.38 MB
       Allocatable           yes (but full)
       PE Size (KByte)       4096
       Total PE              1427328
       Free PE               0
       Allocated PE          1427328
       PV UUID               UDpJBS-unrB-wzi8-c52i-mqd2-54e1-SWpsVL
       --- Physical volume ---
       PV Name               /dev/md3
       VG Name               vg1000
       PV Size               931.49 GB / not usable 2.38 MB
       Allocatable           yes (but full)
       PE Size (KByte)       4096
       Total PE              238462
       Free PE               0
       Allocated PE          238462
       PV UUID               YFpKHI-4UDi-8J2M-y7s4-CRrI-Stlb-1xJ2oi
     DiskStation> lvdisplay
       --- Logical volume ---
       LV Name                /dev/vg1000/lv
       VG Name                vg1000
       LV UUID                RD3nVc-LiPu-CsGZ-ZqA1-K4LA-OD72-GshZUa
       LV Write Access        read/write
       LV Status              available
       # open                 1
       LV Size                6.35 TB
       Current LE             1665790
       Segments               2
       Allocation             inherit
       Read ahead sectors     auto
       - currently set to     4096
       Block device           253:0
     DiskStation> vgdisplay
       --- Volume group ---
       VG Name               vg1000
       System ID
       Format                lvm2
       Metadata Areas        2
       Metadata Sequence No  4
       VG Access             read/write
       VG Status             resizable
       MAX LV                0
       Cur LV                1
       Open LV               1
       Max PV                0
       Cur PV                2
       Act PV                2
       VG Size               6.35 TB
       PE Size               4.00 MB
       Total PE              1665790
       Alloc PE / Size       1665790 / 6.35 TB
       Free PE / Size        0 / 0
       VG UUID               a7L11z-Pukv-f052-tv0N-CdNP-qQou-SFjIjC
     DiskStation>
     DiskStation> cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
     md3 : active raid5 sda6[1] sdb6[0]
           976742784 blocks super 1.2 level 5, 64k chunk, algorithm 2 [2/2] [UU]
     md2 : active raid5 sdc5[6] sda5[4] sdd5[2] sdb5[5]
           5846338944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
           [==============>......]  recovery = 70.9% (1383486592/1948779648) finish=121.0min speed=77818K/sec
     md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
           2097088 blocks [12/4] [UUUU________]
     md0 : active raid1 sdc1[3] sda1[1] sdb1[2] sdd1[0]
           2490176 blocks [12/4] [UUUU________]
     unused devices: <none>
     DiskStation>
     DiskStation> pvs
       PV        VG     Fmt  Attr PSize   PFree
       /dev/md2  vg1000 lvm2 a-     5.44T     0
       /dev/md3  vg1000 lvm2 a-   931.49G     0
     DiskStation> lvs
       LV   VG     Attr   LSize Origin Snap%  Move Log Copy%  Convert
       lv   vg1000 -wi-ao 6.35T
     DiskStation> lvs --segments
       LV   VG     Attr   #Str Type   SSize
       lv   vg1000 -wi-ao    1 linear   5.44T
       lv   vg1000 -wi-ao    1 linear 931.49G
     DiskStation>
     I have my doubts that at the end of the process it will automatically resize md2, instead of using the extra space for md3 as it did. Is there any way to manually fix this w/o breaking DSM? I just want my data volume all on one physical and logical partition.
  14. I'll be trying out alpha 10.2 and an LSI 9201-8i (IT mode) via passthrough in the next day or so. Swapping out new disks one by one and repairing the array to copy my current system to the new array. Only 1.5 disks to go. They take like 12 hours each.
  15. Why not just buy a cheap LSI HBA on eBay? That's what I did after wasting my money on other cheap controllers. Supposedly the LSI 92xx series are proven for VT-d, if you want to use passthrough. http://www.ebay.com/itm/LSI-Internal-SA ... 3cddd62ac4 That listing is from the same guy I bought mine from. It's basically an LSI 9211-8i in IT mode (it can only do IT mode, it has no flash). If you end up going that route, you can pick up cheap cables from Monoprice... assuming you're in the US.
  16. Though, I'm running an AMD Turion X2. Any tutorial on how to update?
     I don't think we can do it, unless you know how to build kernels. rd.gz is inside the zImage (for the latest revs of gnoboot), and everything I've read implies you need to be able to rebuild the zImage if you make changes to files in the ramdisk, especially if it changes the size.
  17. Copy the new VERSION file to /etc and /etc.defaults in rd.gz; it should work, I guess.
     How do we go about that with gnoboot, since rd.gz is packed into the zImage? edit: fdisk -l zImage shows me partition info. The partition starts at sector 15. I did the following, but I can't mount the loop device; it keeps saying unknown filesystem, even if I use the -t vfat option in the mount command. losetup /dev/loop0 zImage -o $((15*512))
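     If anyone wants to experiment, a generic (not gnoboot-specific) trick is to hunt for gzip magic bytes inside the image and try unpacking from each offset; this assumes the ramdisk is sitting in there as a gzip'd cpio, which I haven't confirmed:
       grep -abo $'\x1f\x8b\x08' zImage | head                  # byte offsets of candidate gzip streams
       dd if=zImage bs=1 skip=OFFSET | gunzip -c > piece.bin    # OFFSET is a placeholder for one of those numbers
       file piece.bin                                           # hoping for a cpio archive (the ramdisk)
       # if piece.bin is the decompressed kernel instead, repeat the grep on piece.bin
       cpio -idv < piece.bin                                    # unpack the ramdisk into the current directory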
  18. Nope, they are 2 different things. Small updates come as .deb packages and don't increase the build number. New builds have a new build number (4458) and come as a .pat file.
  19. For the serial, just add it to grub, like it was in prior gnoboot releases: kernel /zImage sn=B3JN00310. I'm not sure where the MAC address comes from.
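     For context, a menu.lst entry would look something like this; the title and root line are just examples, and any other kernel arguments your existing gnoboot entry has should stay as they are, with only the sn= part added:
       # example entry; keep your existing root/kernel arguments and append sn=
       title gnoboot
       root (hd0,0)
       kernel /zImage sn=B3JN00310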
  20. I used to own a DS411+. Loved it, but it was underpowered for Plex transcoding of movies. Sold it and built my own system with Intel thin-ITX motherboards, first with a DG61AG and i3 2120T, then a DQ77KB and i7 3770T
  21. What commands do you run to back up the partition tables? I think I'll do the same before expanding.
     http://www.ducea.com/2006/10/09/partition-table-backup/ And apparently GPT partitions can't be backed up with sfdisk. Gotta use a different program: http://askubuntu.com/questions/57908/ho ... to-another My new array is GPT but I haven't backed them up yet. They are in my test system until I migrate all the data off my real system.
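     Roughly, a sketch of both cases (sgdisk ships with gdisk; the file names are just examples):
       sfdisk -d /dev/sda > sda.mbr.txt             # dump an MBR partition table
       sfdisk /dev/sda < sda.mbr.txt                # restore it later
       sgdisk --backup=sda.gpt.bin /dev/sda         # GPT disks need a GPT-aware tool, e.g. sgdisk
       sgdisk --load-backup=sda.gpt.bin /dev/sda    # restore the GPT backup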
  22. I'll let you know in a couple days. Just ordered another WD 3TB Red NAS drive. I'll give it a try on my test system that currently has 3 x 3TB by adding the 4th disk to the array. A while ago, way before gnoboot or XPEnology existed, I was using qnology. I added a disk to expand my array and it blew out my volume. Took a long time of research and help from folks on the internet to get my data off. Basically, the data partition got deleted off each disk. I've since expanded an array on a Synology VM (wasn't gnoboot), but now I make backups of my partition tables just in case something goes wrong.
  23. Why modify the grub menu to default to the install option? You only use that option once, then use normal boot after the install. How is your install failing? Error message? If it appears to go through the install fine but reboots back to install again, it's probably because you changed grub to always boot into install mode.
  24. Nice! I just tested this out in alpha 10. Was at DSM 5 beta Upgrade 1, downgraded to DSM 4.3 3827 and verified my test files were still there. Then upgraded back to DSM 5 beta and verified my files were still there again. All good.