ilovepancakes

Members
  • Content Count: 59
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About ilovepancakes

  • Rank: Regular Member


  1. Anybody able to get this working? I gave both VMs serial ports, different serial numbers, different MAC addresses, started with no volumes created, and MTU 9000. The cluster creates without issue, but it then reboots the passive server as the last step and the passive server never comes back up fully. The passive server's IP is pingable, and the serial output appears to show a successful boot because the login prompt is the last entry, but DSM is not accessible from the passive IP in a web browser and the cluster manager says the passive server is offline.
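    Since the cluster network is set to MTU 9000, one thing worth ruling out is a jumbo-frame mismatch somewhere along the path. A minimal check from the ESXi host shell, assuming 192.168.1.50 stands in for the passive server's IP:

        # Send an 8972-byte payload (9000 minus IP/ICMP headers) with the
        # don't-fragment bit set; it only succeeds if MTU 9000 works end to end.
        vmkping -d -s 8972 192.168.1.50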
  2. ilovepancakes

    DSM 6.2 Loader

    On that one I was on 3617 too. I have a 3615 VM as well but haven't tried the change on that one yet, although I am guessing it still won't work, since SCSI did work on 3617 before 6.2. But I guess it's possible, like you say, that 3615 retained something that 3617 did not. Will try it later. As for speed, okay, interesting; I just assumed it was disk related, but I definitely did notice slower load times for DSM, logging in, etc. when I went to 6.2.1/SATA virtual disks.
  3. ilovepancakes

    DSM 6.2 Loader

    Nothing changed other than going to 6.2.2 from 6.2.1, but it never worked as described in that thread either. Is there something particular you do in terms of moving the disks to SCSI afterward? I boot up DSM with the SATA setup and everything works. I shut down the VM, add a SCSI controller, and change only the volume1 disk (not synoboot) to the new SCSI controller. When I boot back up, the following screen comes up. If I change the disk back to the SATA controller, it seems to boot up fine again.
  4. ilovepancakes

    DSM 6.2 Loader

    I have 1.03b 3617xs running 6.2.2 on ESXi 6.7u1. I only seem to be able to boot VMs with this setup using SATA disks; I used to be able to boot SCSI virtual disks before 6.2. Is there any way to get virtual SCSI disks/a SCSI controller on the VM working again instead of SATA? The performance seemed way better with SCSI emulation.
  5. Outcome of the update: SUCCESSFUL - DSM version before update: DSM 6.2.2-24922 - Loader version and model: JUN'S LOADER v1.03b - DS3615xs and DS3617xs - Using custom extra.lzma: No - Installation type: ESXi 6.7u1 - Additional comments: Reboot required
  6. Hmm, the synoboot VMDK was always SATA, and I didn't care about that since performance doesn't matter much for a quick boot. My "Data" volume in DSM and the volume DSM is installed to were made up of VMDKs added to the VM as SCSI, never SATA. When I did a test upgrade from 6.2 to 6.2.1, the only way DSM would actually start up was if I changed the SCSI volumes to SATA (and changed the NIC to e1000e). If I try adding a new SCSI VMDK as a new disk in the VM now, it doesn't show up in Storage Manager. If I change that SCSI VMDK to SATA, it shows up.
  7. Normally my disks show as "Healthy" in green text, but occasionally I get them in red saying "Failing" or something like that, even though they still work. Only once have I gotten a VMDK that actually reported as crashed even though ESXi showed everything was fine. I think I got it working again by removing the VMDK from the machine, rebooting, then adding it again and rebooting. That VMDK was then available to "Repair" the now-degraded storage pool with. If that doesn't work, assuming this VMDK is part of a storage pool with RAID 1, 5, etc., maybe delete the VMDK, create a new one, and rebuild the array onto that? Back up your data first, of course, before trying any of the above.
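    If it helps, one way to cross-check what Storage Manager is reporting is to look at the md arrays directly over SSH. This is a read-only sanity check, and md2 below is just the typical device name for the first data volume (it can differ on your system):

        # A degraded array shows something like [2/1] [U_] instead of [2/2] [UU].
        cat /proc/mdstat

        # More detail on a specific array, including which member is missing.
        mdadm --detail /dev/md2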
  8. Any way to get SCSI VMDKs on ESXi 6.7u1 working on 1.03b loaders with DSM 6.2.1+? It seems that once I upgrade past 6.2, I have to change my VMDKs to SATA controllers (among a few other changes) to get DSM to boot and work. Can 6.2.1+ not recognize SCSI VMDKs? I want to use SCSI because performance was way better with it than it is with SATA now.
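    One way to narrow down whether this is a driver issue, assuming you can still boot with the data disks temporarily on SATA while an empty SCSI disk is also attached: check from SSH whether DSM's kernel sees the virtual SCSI controller at all. The module names below are the usual ones for ESXi's LSI Logic and paravirtual controllers:

        # Is the virtual SCSI HBA visible on the PCI bus?
        # (lspci may not be present on every build.)
        lspci | grep -i -E 'lsi|scsi'

        # Did the kernel log anything for the matching driver modules?
        dmesg | grep -i -E 'mptspi|mptsas|vmw_pvscsi'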
  9. Awesome help! I was able to make it work with that script as a base, with some slight modifications. The name of the service had to have "ctl" after "pkg" and no "-synovideopreprocessd" after the package name.

        #!/bin/sh
        # Check in with the service monitor only if VideoStation is running.
        if synoservicectl --status pkgctl-VideoStation | grep 'start/running'
        then
            curl --retry 3 https://example.com
        else
            curl --retry 3 https://example.com/fail
        fi

    Thank you @Olegin!
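    For anyone adapting this to a different package, it may help to confirm the exact service name first. A minimal check, assuming the synoservice tool on your DSM 6.x build supports listing:

        # List registered services and filter for the package of interest.
        synoservice --list | grep -i videostation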
  10. Hi all, I need a little assistance with how to write a simple script involving the package status (start or stop) for specific packages in DSM. I am setting up a service monitor and want to be able to "read" the status of some of the packages running on DSM (Chat, VideoStation, etc.). The way it works is that the service monitor provides a URL, and Linux can "check in" against that URL to show it is alive. What I want to do is have a scheduled task on DSM execute bash commands every 5 minutes to check whether a package is started and, if it is, curl or wget the URL to the service monitor. Creating the scheduled task and using curl to ping the URL I have a handle on, but what commands can be run before the curl to make sure the package is in a started state first? Ideally I would imagine a command that returns an error or nothing at all would be best, because then I could simply enter "status_check_command && wget https://ping.com/", and I think the wget won't fire unless the status_check_command runs successfully? Is that right? The only commands I have come across (synoservice), though, return something every time: with --status [service name], the command outputs whether the service is start or stop. But how can I create a command that only moves on to the curl if the status returned is started?
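    For what it's worth, the && intuition is right: the command after && only runs when the command before it exits with status 0, and grep exits 0 only when it finds a match. A minimal sketch of that pattern, assuming the same service name and status string used in the reply above (adjust both for your package and DSM version):

        # Hypothetical one-liner: check in only when VideoStation reports running.
        synoservicectl --status pkgctl-VideoStation | grep -q 'start/running' && curl --retry 3 https://ping.com/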
  11. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2 Update 2 and 6.2.1 Update 6 - Loader version and model: JUN'S LOADER v1.03b - DS3617xs - Using custom extra.lzma: NO - Installation type: VM - ESXi 6.7u1 - Additional comments: Reboot required. Had to change all attached VM disks to SATA (no SCSI), NIC to e1000e, and USB 3.0 controller only (2.0 causes hang on boot).
  12. Wow, I didn't even know I could do that and see the output of it booting. Looks like it was hanging at the USB module loading. I changed the USB controller on the VM to 3.0 instead of 2.0 and it worked!
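    For anyone else wanting to capture the boot output the same way, one approach is to add a file-backed serial port from the ESXi shell while the VM is powered off; the datastore paths below are just examples:

        # Append a file-backed serial port to the VM's .vmx.
        VMX=/vmfs/volumes/datastore1/DSM/DSM.vmx
        echo 'serial0.present = "TRUE"' >> "$VMX"
        echo 'serial0.fileType = "file"' >> "$VMX"
        echo 'serial0.fileName = "/vmfs/volumes/datastore1/DSM/serial0.log"' >> "$VMX"

        # Boot the VM, then follow the loader/kernel output live.
        tail -f /vmfs/volumes/datastore1/DSM/serial0.log

    You may need to reload or re-register the VM for the .vmx change to be picked up.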
  13. I am running ESXi 6.7 with Jun's 1.03b loader for 3617xs. I finally found a combination of settings that lets me install and run DSM 6.2.1 and then upgrade to 6.2.2 (e1000e NIC and SATA HDDs only); the SATA virtual disks were the key part I was missing when trying before. But I have an existing DSM install with the same loader that is running 6.2. I change the NIC to e1000e and change the VMDKs to SATA instead of SCSI, boot back up, and 6.2 still works, now showing e1000e and SATA drives. So the pre-upgrade changes worked and it boots. Then I log in to 6.2 and choose my 6.2.1 PAT file; it does the upgrade and reboots, but DSM doesn't come back on the network. Any ideas why I can get 6.2.1 working with 1.03b 3617xs as a new install but can't upgrade an existing one, even though I changed the disks over to SATA and the NIC to e1000e? Anything else special I have to do to get this working? I feel so close!!!!
  14. Did you do anything special to get it to work? I have an R720xd running ESXi. I am using the 3617xs loader 1.03b with an e1000e NIC. If I install 6.2, it works fine. If I install the 6.2.1 PAT with the exact same setup, it breaks (I tried upgrading to 6.2.1 from 6.2 and also installing 6.2.1 directly from the beginning).