XPEnology Community

flyride

Moderator
Everything posted by flyride

  1. This leads to a lot of questions, but you still haven't answered the ones I asked earlier: What DSM version are you on now? What loader version? If this is your system pre-upgrade, did you always have a virtual disk with nothing partitioned on it? Are you trying to upgrade your system while you are having a disk integrity problem, or is this something that just happened during the upgrade? In your VM, you should have one SATA controller with the loader vdisk attached to it (0:0), and a second SATA controller with the 50MB vdisk attached to it (1:0). Correct?
  2. Oh good! So let's just make sure we are on the same page: you installed a new DSM to a new drive, and then added your two array drives to it. The new drive is "unused" (but has DSM on it). When you installed your array drives, your Volume came back. And you probably got the System Partition error, clicked "Fix", and now it looks all green like you just posted, right? If that is all true, shut down the NAS and remove the new/unused drive, then boot back up and make sure everything looks good. If it does, shut down again and relocate your two array disks to whichever ports you wish.
  3. All that is normally required is a clearing of the partition table (i.e. delete the three partitions).
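     For illustration, on a Linux PC with the drive attached as (hypothetically) /dev/sdX, you can delete the partitions interactively or zap the table outright; double-check the device name first, as this is destructive:

     # fdisk /dev/sdX          (delete each of the three partitions with d, then write with w)
     # sgdisk --zap-all /dev/sdX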
  4. Not exactly, but we can get there. If you are up and running on your old DSM, then we should be able to resolve whatever problem you had that was preventing you from upgrading. What DSM version are you on now? What loader? Please post snapshots of the Disk Manager Overview and HDD/SSD screens.
  5. So you did a clean install of 6.2.3 on a VM that was working, and you have ADDED an LSI card and tried to boot up with no reinstallation of your loader. Right?

     If so, DSM is having to choose between the functional 6.2.3 DSM on your virtual disk and whatever broken DSM is on your array. It is choosing the array, ignoring the good DSM install on your virtual disk, realizing mid-boot that it isn't valid, and then offering to migrate/upgrade. You have a conflict and some options.

     First, did you build a Storage Pool and volume on your virtual disk? If so, that will conflict with what is already on your array. So boot up the virtual disk without the LSI and delete the volume and Storage Pool from the virtual disk.

     Do you have DiskIdxMap and SataPortMap already from your previous setup? If not, you need to figure out how the controllers are mapped (see the sketch at the end of this post). You (should) have three different disk controllers. You have a hot-pluggable array, so unplug all your disks (just unlatch them and allow them to spin down) and boot your VM back up with the LSI in passthrough and look at Storage Manager. What is the arrangement of the disks and slots? You MIGHT see the loader disk; you SHOULD see the VM disk. Take your #1 hot-plug drive and temporarily replace it with a blank HDD. What slot does it show up as?

     Record all this information and report back so we can calculate DiskIdxMap and SataPortMap, or report the values you are already using. The issue of the boot selecting the wrong DSM can be addressed by changing DiskIdxMap once we know exactly what it is doing.
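     If it helps, the controllers can also be enumerated from the DSM command line; a hedged sketch over SSH (the grep pattern is just an example, and names will differ on your hardware):

     # lspci | grep -i -E 'sata|sas|lsi'
     # ls /sys/class/scsi_host

     Each hostN entry is a storage interface the kernel registered (AHCI creates one per SATA port), which is what SataPortMap/DiskIdxMap are arranged against.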
  6. What were you using to boot the new VM? A virtual disk? What did you do with that virtual disk when you connected your LSI card?
  7. I think I understand: you are saying you built a new loader and a test VM under ESXi on 6.2.3. If you were able to do that successfully, you overrode the loader default during boot-up, or used an ISO that already had the "third" grub entry (ESXi) selected. Now I think you are trying to apply a new, clean 1.03b loader to your VM for migration, and you are receiving error 13. This is usually because you missed the step of overriding the loader default as you did in your test. It happens quickly, so you might want to set the VM boot option to enter the BIOS configuration first, making it easier to hit the down-arrow key and select the ESXi boot option.
  8. # cd /dev
     # mount synoboot1 /tmp/synoboot_part0

     Now if your synoboot doesn't exist because you need to implement FixSynoboot.sh, that's an entirely different problem.

     # ls /dev/synoboot*
     /dev/synoboot
     /dev/synoboot1
     /dev/synoboot2
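     One hedged note not in the original post: mount needs the target directory to exist, so if the mount fails you may need to create the mount point first:

     # mkdir -p /tmp/synoboot_part0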
  9. Install FixSynoboot.sh on your current system (the usual install is sketched below), then write a new, clean 1.03b DS3617xs loader, reboot your machine, and do a migration install selecting the DS3617xs PAT file. This has some risk to your system configuration if it goes wrong, but not to your data, as long as you don't select an installation option that clears your data disks. It would be a great candidate for a full backup of your data and/or a simulation of the migration on a non-production installation. Note that the DS3617xs platform has no hardware transcoding or NVMe features, so be sure you are not relying on those services, such as NVMe cache.
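     For reference, the usual FixSynoboot.sh install (per the fix thread elsewhere in this forum; hedged, so verify against that thread):

     # cp FixSynoboot.sh /usr/local/etc/rc.d
     # chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh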
  10. It's not a hardware limitation; it's a kernel compile-time parameter. Until we can compile a Synology kernel, there is nothing to be done about it aside from selecting DS3617xs for more threads, or other platforms for features. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
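      For illustration, the cap is a kernel build option (commonly attributed to CONFIG_NR_CPUS). A hedged check, which only works if the running kernel exposes its config (many DSM kernels do not):

      # zcat /proc/config.gz | grep NR_CPUS
      CONFIG_NR_CPUS=8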
  11. WinSCP cannot do this job for you in its entirety. The information above won't do it either. I updated post #2 with some discrete instructions. https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/?do=findComment&comment=141688
  12. You keep asking questions that I JUST answered, which makes me think you aren't willing or able to follow the steps. If you do a clean install, your data drives are NOT in the system. You add them after the install, and step #5 overwrites the broken DSM install they have now. Therefore it doesn't matter what version they are on, which is the whole point. Given that you are having trouble with the install in the first place, I strongly suggest you follow the steps and get ANY DSM install working before you do anything else.
  13. You can clean install 6.2.2 the same way you were going to install 6.2.3, yes. The recovery solution listed above will work with any 6.x version.
  14. I'm afraid there has to be an installation error somewhere. You were working with 6.2.2, so there isn't any reason your hardware cannot be supported. And there is a posted example of a successful upgrade from someone with your identical hardware. I'd go over the process very carefully and make sure nothing is being missed. You should feel confident experimenting since your data drives are safely away from the process. If you do think your hardware is causing problems for some reason, first try a different USB flash drive (and update VID/PID accordingly, as sketched below) and consider ordering an Intel CT PCIe NIC for $20 to make sure there isn't anything weird happening with your network hardware.
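      A hedged sketch of the VID/PID update, assuming a Linux PC to read the flash drive's IDs (the Kingston line is illustrative output, not your values):

      # lsusb
      Bus 001 Device 004: ID 0951:1666 Kingston Technology DataTraveler 100 G3

      Then in grub.cfg on the loader:

      set vid=0x0951
      set pid=0x1666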
  15. This script has evolved from a fairly simple "push" of the descriptive text into the Synology libraries, overriding the default DSM text for the native CPU. Now the patch script is trying to use hardware information exposed in Linux (/proc/cpuinfo) to update not only the CPU description but the number of cores and hyperthreads. However, this is a flawed approach, because DSM has a thread limit matching the real hardware each image was built for.

      DS916+/DS918+/DS3615xs support a maximum of 8 threads of any type. This can be 8 physical cores, or 4 cores and 4 hyperthreads. But /proc/cpuinfo will not enumerate more than 8, regardless of how many cores and threads the CPU has. If you have a 6- or 8-core system, you will get the best performance by DISABLING hyperthreading with these DSM images so that all your actual cores are in play.

      DS3617xs supports up to 16 threads of any type. This can be 16 physical cores, or 8 cores and 8 hyperthreads. The same limitations apply: if you had a dual X5660 Xeon system, each CPU with 6 cores and 6 HT's, you would be best served by disabling hyperthreading so all your cores were functional.

      And just to confirm, the script itself addresses a purely cosmetic problem. It does not enable the use of CPU hardware. DSM uses the available cores and threads subject to the limitations above, regardless of what Control Panel says.
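      If you want to see what the kernel actually enumerated, a hedged sketch over SSH (the outputs shown are illustrative, here for a 4-core/4-hyperthread CPU):

      # grep -c ^processor /proc/cpuinfo
      8
      # grep 'cpu cores' /proc/cpuinfo | head -1
      cpu cores       : 4

      The first number is the total threads visible to DSM (capped at 8 or 16 as described above); the second is physical cores per socket.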
  16. Just confirming: your USB boot is in BIOS mode, not EFI? Sometimes that is called Legacy, Compatibility Mode, or CSM in the BIOS.
  17. You could have done this if you wanted a clean install and your data too.
  18. I don't understand the first question. The second question makes me nervous about your understanding of where your data is and what you are doing. If you are going to try and do a clean install and then add your data back in... the drive that is new, that has no data on it, should be in the first port/port #0/blue according to your picture. Your data drives should not be connected at all. Once it is up and running, you can add your data drives on any of the other three ports.
  19. Yes, I mean exactly that. The new drive must be plugged into the first port. When you add in the array drives they must be plugged into the second and third ports. This is clearly listed in the post; I don't know why you are asking me to confirm again what is already posted. If you were asked to migrate and it was the only drive in the system, you have used this drive with DSM before. Clear the partition structure using another PC before trying to install (see the sketch under item 3 above).
  20. If you are trying to do a clean install it won't work unless you follow my directions. The new drive MUST be the first drive. And you should not have any other drives in the system, so it should not ask to migrate. In the upgrade thread, there are two reports for old Dell Optiplex: https://xpenology.com/forum/topic/29401-dsm-623-25426/?do=findComment&comment=148303 https://xpenology.com/forum/topic/29401-dsm-623-25426/?do=findComment&comment=148305
  21. Technically, 6.2.1/6.2.2 break drivers in 1.03b. 6.2.3 now allows Jun's native drivers to work again, which makes it compatible with a lot more hardware. extra.lzma (recent compiles of drivers) as posted here applies when there is a need for drivers not originally compiled in the loader, or if there are incompatibilities between your hardware and those drivers (i915 drivers seem to be particularly problematic). Generally not an issue for plain vanilla Intel stuff. I just corresponded this morning with someone who upgraded an Optiplex 790 to 6.2.3 with no added drivers and no issues. I believe you should get a clean install working before you make any other decisions.
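      If you want to confirm which NIC driver actually loaded after boot, a hedged sketch over SSH (e1000 is just an example module name for Intel NICs; substitute yours):

      # lsmod | grep -i e1000
      # dmesg | grep -i eth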
  22. The values in /etc.defaults/synoinfo.conf are copied to /etc/synoinfo.conf each reboot. You will probably need SataPortMap=188 and DiskIdxMap=180008 for your drives to be properly visible spanning controllers. If you are asking whether this is new for version 6, you need to go to the FAQs and spend some time reviewing the 6.x loader instructions and how to implement those settings into the loader.
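      For illustration, in Jun's loader these are kernel boot arguments set in grub/grub.cfg on the loader's first partition; a hedged sketch using the values above (your existing sata_args line may carry additional arguments that should be kept):

      set sata_args='DiskIdxMap=180008 SataPortMap=188'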
  23. Yes, a clean install would involve setting up a new copy of the loader. The RAID configuration of your data disks doesn't matter.
  24. If you cannot afford to lose the data, why were you upgrading? No backup? At least test using a spare disk?

      Anyway, does your system support another drive? If so, you can just install clean and then attach your data drives, as long as they did not get overwritten by your upgrade procedure.

      Remove your data disks, get a third drive that has no data on it, and do an initial install with loader 1.03b and DSM 6.2.3. If you have difficulty with a basic 6.2.3 install on the clean drive, at least you can troubleshoot and fix it without risking your data. Once DSM boots normally, do NOT create a Storage Pool or Volume. Just make sure DSM is booting and working, then shut it down.

      Attach your data disks. It is important that the drive you just did the initial install on remains as drive #1 (or the lowest-numbered port). The data disks you add should be #2 and #3 if your sig is accurate. Boot up and your volume and data should be accessible.

      You'll need to "Fix the System Partition" to overwrite DSM on the array disks. As a result, any DSM customization you may have made will be lost. If you did a DSS backup of your DSM settings you can re-import it and at least those items (user accounts, etc.) will be restored. If you had complex folder perms they may be lost. Packages will need to be reinstalled. Make certain everything is green in Storage Manager (a quick command-line check is sketched below).

      If you want to get rid of the extra drive, shut down DSM and remove the disk. Boot back up. No other action required.
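      Once everything is reattached and the System Partition is fixed, a hedged sanity check over SSH (md0 is the DSM system partition, md1 is swap, and data arrays are md2 and up):

      # cat /proc/mdstat

      Healthy members show all U's in the status brackets (e.g. [UU]) with no underscores.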