XPEnology Community

AntonV

Rookie
  • Posts

    4
  • Joined

  • Last visited


  1. Thanks! Everything went like clockwork, without a single hitch.

     HP Microserver N54L Gen7
     Proxmox 7.0.11
     DS3615xs DSM 7.0.1-42218 Update 2

     I set the disk size to 10 GB in the script, then added a large disk. The large disk was fine, but DSM refused to create a volume on the small one, saying it was too small: it wants at least 10 GB of free space (after initialization). So I had to trick it from the shell. I created a partition with fdisk (type fd), which came out as /dev/sdg3, then created a RAID1 from that single disk:

     ```
     mdadm --create --level=1 --force --raid-devices=1 /dev/md3 /dev/sdg3 --metadata=1.2
     ```

     Then:

     ```
     pvcreate /dev/md3
     vgcreate vg2 /dev/md3
     lvcreate --name volume_2 -l 100%FREE vg2
     mkfs.btrfs /dev/mapper/vg2-volume_2
     reboot
     ```

     After that, DSM finds it as an available pool; once you "activate" it, a small 5 GB volume appears as /volume2. I use it for packages, homes, and Entware, if there is a build for 7.0.
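     For reference, here is how one might verify each layer before rebooting. This is a minimal sketch, assuming the device names match the steps above (/dev/md3, vg2, volume_2):

     ```
     # Confirm the single-disk RAID1 is assembled and clean
     mdadm --detail /dev/md3
     cat /proc/mdstat

     # Confirm the LVM stack sees the new PV, VG, and LV
     pvs /dev/md3
     vgs vg2
     lvs vg2

     # Confirm the btrfs filesystem was created on the logical volume
     btrfs filesystem show /dev/mapper/vg2-volume_2
     ```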
  2. I had a similar issue with SCSI LUNs in Synology's proprietary ADVANCED format. My RAID1 was broken; I restored it manually, but before that I tried to extract the data. The only tool I found that claims it can deal with Synology LUNs is ReclaiMe (and ReclaiMe Pro). It didn't help me with the Advanced LUNs, but right now their chief developer is analyzing what's wrong with my setup and trying to update their software. Try it - it may work in your case.
  3. I restored my /volume1 myself! Hurray!

     Okay, so after a week of investigation I tried different tools to extract the data, and here is what happened.

     1) The only tool that states it can extract data from Synology LUNs is ReclaiMe (and ReclaiMe Pro, which is 4 times more expensive). However, I tried it and it failed. I talked to their support, and one of their senior developers started investigating why it is not working (and is still doing so; they are very interested in this case).

     2) What is funny is that I actually could have restored it a week ago if I had kept a cool head :) (Actually no, I didn't know one command-line option.) Previously I had already disassembled my 2-disk RAID1 with:

     ```
     # Stop the RAID1
     mdadm -S /dev/md2

     # Recreate a RAID1 with just one disk, keeping the data as is
     mdadm --create --assume-clean --level=1 --force --raid-devices=1 /dev/md2 /dev/sdd5
     ```

     At that point I stopped last time, since LVM couldn't find the volume groups and volumes... Now to the happy finish:

     ```
     # Stop the RAID1 again
     mdadm --stop /dev/md2

     # Recreate the RAID1 with the new option "--metadata=1.2"
     mdadm --create --level=1 --force --raid-devices=1 /dev/md2 /dev/sdd5 --metadata=1.2

     # Reload the LVM configuration from backup, where vg1000 is the name of my
     # volume group. This is a dry run, to check that it is okay
     vgcfgrestore --test -f /etc/lvm/backup/vg1000 vg1000

     # Now the real run
     vgcfgrestore -f /etc/lvm/backup/vg1000 vg1000
     # Output: Restored volume group vg1000

     # Check that it is in the list now
     vgs -v

     # Finally, make it active
     vgchange -ay vg1000

     # After this my /volume1 is restored as a RAID1 mirror with only one HDD,
     # but DSM does not pick it up properly - a reboot is needed.
     ```

     After the reboot everything works fine. Storage Manager shows my RAID, however it has a "Failed system partition". That is an easy fix for me: looking at the disks with lsblk, I can see that /dev/md0 (the system RAID1) does not include the partition from my HDD (/dev/sdd1).

     ```
     mdadm --manage /dev/md0 --add /dev/sdd1
     ```

     And everything is okay after a few minutes.
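     A note on the --metadata option that made the difference: mdadm writes the superblock at different offsets depending on the metadata version, so recreating an array with the wrong version points the array at the wrong data. Before running --create over an existing member, it is worth checking what the partition actually carries. A minimal sketch, assuming the member partition is /dev/sdd5 as above:

     ```
     # Print the existing md superblock, if any survives:
     # the "Version" field (e.g. 1.2) is what --metadata must match
     # when recreating the array with --create --assume-clean
     mdadm --examine /dev/sdd5
     ```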
  4. Hi, some time ago my XPEnology 6.1.5 RAID1 crashed due to a power failure. Then I did the most stupid thing possible: I didn't make a backup of the HDDs before trying to recover the RAID... After a week of trying different things, I have the following:

     1) /volume2 - a single 2 TB HDD used for non-vital stuff - working okay. I even backed up almost all the important data from the RAID1 while it was in a "read-only" state for a while :)

     2) /volume1 - two crashed RAID1 HDDs, disk A and disk B.
     - Disk A is almost okay, but reports that the system volume crashed.
     - Disk B - while trying to reassemble the RAID to make it think it contains only one drive, I ended up "recreating" the RAID and clearing the superblock. (I wanted to reassemble with mdadm --assemble, but instead ran --create, so it cleared the superblock :( ) Now Synology doesn't see volume1 at all and shows the RAID as crashed.

     Now the need: I have 3 iSCSI LUNs.
     - One is non-growing - I just took the image file from the @iSCSI folder and successfully copied all the data from inside it.
     - The other two look like they are in a strange ADVANCED, growing format. I cannot figure out how to mount or copy them (ls shows unreal file sizes like 100 GB). I tried checking with btrfs filesystem show, but it doesn't recognize any filesystem inside.

     I have really important data there. How can I copy or mount those strange LUNs?

     P.S. I had no spare drive to rebuild the RAID classically by adding a new disk, and I was also worried about stressing the existing drives - both have 45,000+ hours online.
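     For the plain, non-growing LUN, the usual way to "copy all the data from inside" the image file is to attach it to a loop device and mount it read-only. A sketch, assuming a recent util-linux losetup and a hypothetical image path (the actual file name under @iSCSI will differ); note this will not work for the ADVANCED "growing" LUNs, which evidently do not contain a directly mountable filesystem image:

     ```
     # Attach the LUN image to a free loop device, read-only, scanning for partitions
     losetup --find --show --partscan --read-only /volume2/@iSCSI/LUN-1/lun.img

     # Suppose it printed /dev/loop0; mount the filesystem
     # (use /dev/loop0p1 instead if the image is partitioned)
     mkdir -p /mnt/lun
     mount -o ro /dev/loop0 /mnt/lun

     # ... copy the data out, then clean up
     umount /mnt/lun
     losetup -d /dev/loop0
     ```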