All Activity


  1. Past hour
  2. Hardware? This quad-port NIC is brand new, but I'll check that too. Thanks.
  3. Today
  4. Yes, my current NAS is backed up. I planned on doing a new install. My biggest challenge right now is how to attach the raw disks and use them. From what I gather, if all goes well with that hookup, DSM should be detected along with my data intact. Given that I do have everything backed up, is there any real benefit in doing the raw disk approach (I am highly unfamiliar with this) versus creating two VDI drives, one on each HDD, and attaching them that way?
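     For the VDI route mentioned above, a minimal sketch of creating and attaching two virtual disks from the host's command line (assuming a VM named "DSM" with a SATA controller named "SATA"; the names, paths, and sizes are placeholders):

        # create two virtual disks, one per physical HDD (sizes are in MB)
        VBoxManage createmedium disk --filename disk1.vdi --size 102400 --format VDI
        VBoxManage createmedium disk --filename disk2.vdi --size 102400 --format VDI
        # attach them to the VM's SATA controller on separate ports
        VBoxManage storageattach "DSM" --storagectl "SATA" --port 1 --device 0 --type hdd --medium disk1.vdi
        VBoxManage storageattach "DSM" --storagectl "SATA" --port 2 --device 0 --type hdd --medium disk2.vdi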
  5. Sorry, but one more question in case everything fails. I use Hyper Backup and have completely backed up my server (very important stuff) just in case of a problem. I just want to be sure I understand what to do if, for some reason, everything goes wrong: on the new server, completely start from scratch, install XPenology again, and then use Hyper Backup to restore the new server to how the old server was. I'm just nervous about losing my data. Hope that makes sense; it even confuses me.
  6. You can't add camera licenses the normal way by purchasing through Synology for an XPenology box. The "legal" way is to use a real DSM box and share the license with the XPenology one. (The alternative might be a hacked version of SS, but besides the legal trouble and the difficulty of finding one, there is also a risk of it adding more to your XPenology than just SS, i.e. malware/viruses; it would be fairly easy to add that kind of thing to an unsigned *.spk file.) There is also an older version of SS that has a few more free cameras by default (look in the tutorial section for this).
  7. Maybe this is a starting point? https://docs.oracle.com/en/virtualization/virtualbox/6.0/admin/adv-storage-config.html Or just Google "VirtualBox Raw Hard Disk" or similar.
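     Building on that link, a rough sketch of the raw-disk route (assuming a Linux host where the two NAS disks appear as /dev/sdb and /dev/sdc, a VM named "DSM", and a SATA controller named "SATA"; all of these are placeholders, and the user running VirtualBox needs read/write access to the raw devices):

        # wrap each physical disk in a raw-access VMDK descriptor
        VBoxManage internalcommands createrawvmdk -filename ~/dsm-disk1.vmdk -rawdisk /dev/sdb
        VBoxManage internalcommands createrawvmdk -filename ~/dsm-disk2.vmdk -rawdisk /dev/sdc
        # attach the descriptors to the VM like ordinary disks
        VBoxManage storageattach "DSM" --storagectl "SATA" --port 1 --device 0 --type hdd --medium ~/dsm-disk1.vmdk
        VBoxManage storageattach "DSM" --storagectl "SATA" --port 2 --device 0 --type hdd --medium ~/dsm-disk2.vmdk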
  8. There are some differences in the default drivers from Synology, like newer LSI SAS drivers and newer Mellanox 10G NIC drivers, but the kernel with DSM 6.2 is the same for both. With 7.0 there will be more differences: 3617 gets the 4.4 kernel like the 918+ has, and 3615 stays on the 3.10 kernel. Whether that matters depends on what loader(s) we might see for 7.0 (at the moment 6.2.4 and 7.0 are off limits with loaders 1.03c/1.04b). Did you check that the hardware of the 4-port NIC is reliable? Maybe boot up a live Linux and copy some data (that way anything hardware-related is the same as with d
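     One way to check the NIC from a live Linux, as suggested above, is a quick throughput test against another machine on the LAN (a sketch using iperf3; the addresses are placeholders, and -B selects which local port/address is under test):

        # on another machine on the LAN, run the server side
        iperf3 -s
        # on the live-Linux box, test one port at a time for a sustained 60 seconds
        iperf3 -c 192.168.1.10 -B 192.168.1.21 -t 60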
  9. Look into "The noob lounge" subforum.
  10. Does not appear supported. https://kb.synology.com/en-uk/DSM/tutorial/How_to_migrate_between_Synology_NAS_DSM_6_0_and_later
  11. Would that be the same process to also move to a virtual machine? I want to move my physical box to an Oracle VM. I'm having trouble understanding how to connect my 2 drives to point to the VDI.
  12. I would be moving from a DS212+ model (2-disk with a single-core ARM processor, I think). What would a migration install do? Can I stick the two fully loaded old Synology disks into the new machine for a "migration install"?
  13. IMHO there is no reason not to do this in DSM; Ubuntu is not helping you. However, the syntax is inconsistent between your diagnostic/recovery commands. Start with that.
  14. All the current J-series motherboards should work.
  15. Assuming you are moving from a supported Synology unit (generally a + or xs model), yes. The 32-bit (usually ARM-based) Synology models have a smaller DSM partition and cannot be directly integrated. However: importing a Synology array into an existing XPe system can be difficult (volume and storage pool collisions are the most significant issue) but is possible. You may have to use some "advanced" SSH commands to get the array fully integrated and recognized. It may be easier to just do the XPe install using the Synology disks, and it should offer to do a "migration install".
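     For reference, a rough sketch of the kind of "advanced" SSH commands involved in getting a foreign array recognized and checked (assumptions: an SHR/LVM volume, and the common Synology defaults /dev/md2 and vg1000/lv for the data array and volume group; verify your actual names before touching anything):

        # see which arrays the kernel already knows about
        cat /proc/mdstat
        # assemble any arrays found on the newly inserted disks
        mdadm --assemble --scan
        # for SHR (LVM-backed) volumes, activate the LVM layer and list the logical volumes
        vgchange -ay
        lvs
        # mount the data volume read-only first to confirm the data is intact
        mount -o ro /dev/vg1000/lv /mnt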
  16. I have 2x 4TB drives in a Synology NAS box. I would like to transition everything onto an XPenology machine and keep the data on the disks. Is it possible to boot XPenology and then put the two Synology HDDs into the XPenology machine?
  17. @Kamele0N, would you be so kind as to link it? I did so much digging last night (more specifically, looking for posts on this from this week) but came up empty-handed. I'm really excited for my new machine to arrive, but trying to figure out how I'm going to migrate this is stressing me out. (I had an incident a while back where I nearly bricked my machine.)
  18. Hello. Broken write and read cache in the storage. The disks are there, but it says there is no cache and the disk is not available. Tried it in Ubuntu:

        root@ubuntu:/home/root# mdadm -D /dev/md2
        /dev/md2:
                Version : 1.2
          Creation Time : Tue Dec 8 23:15:41 2020
             Raid Level : raid5
             Array Size : 7794770176 (7433.67 GiB 7981.84 GB)
          Used Dev Size : 1948692544 (1858.42 GiB 1995.46 GB)
           Raid Devices : 5
          Total Devices : 5
            Persistence : Superblock is persistent
            Update Time : Tue Jun 22 22:16:38 2021
                  State : c
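     As a starting point for consistent diagnostics (a sketch only, runnable over SSH on the box or from a live Linux; the data array is assumed to be /dev/md2 per the output above, and the member partition number varies by DSM version, so confirm it with lsblk first):

        # overall array state as the kernel sees it
        cat /proc/mdstat
        # detailed state of the data array
        mdadm --detail /dev/md2
        # per-member metadata, useful for spotting stale or missing members
        mdadm --examine /dev/sd[abcde]5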
  19. Thanks. For some reason I keep forgetting that migration does not necessarily have to go from one model to a newer one. The only difference in this table (AFAICT) is the max number of CPU threads. Since my CPU is 4c/8t, this won't be limiting. Now, I wonder why the NIC would be problematic on a DS3617xs rig and work fine on a DS3615xs rig, but I guess it would be easier to migrate and see what happens than to actually troubleshoot the issue with the DS3615xs. Note: so far the test with the alternative NIC (single-port GbE) still didn't lead
  20. If you build the SHR that way from the start, you can add all the disks; when expanding it later, you can only add disks of the same size or larger.
  21. Checked it; no, the SHR settings are fine, but it still won't let me add the disks... For now I have made a second SHR volume; I'll test the disks, then back everything up and probably just build the SHR I need from all six disks from scratch.