ilovepancakes


  1. This first method seems to have worked perfectly! I'm uploading a bunch of data now to fill the increased space and test whether the volume remains stable. I did not find that tutorial in a forum search, so thank you!
  2. @Kall So after this command I get an error that this type of array cannot have its size adjusted/expanded, but if I follow the rest of the steps anyway it all seems to work and the volume is expanded. What is this non-working command actually supposed to do, and are there any side effects from not being able to use it?

     EDIT: I was using JBOD mode with a single disk on the volume I tried this with. Confirming that on a Basic mode volume the above command works and returns normal output. So I guess on JBOD volumes this command doesn't do anything. That being said, does Basic or JBOD perform better with one disk?
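For context on what that step is likely doing: on DSM a Basic volume sits on a one-disk RAID1 md array, while a JBOD volume is an mdadm linear array, and `mdadm --grow --size` only applies to real RAID levels, which would explain the error on JBOD. A hedged sketch of the usual sequence, assuming `/dev/md2` and an ext4 filesystem (check `/proc/mdstat` for your actual layout before running anything):

```shell
# Hedged sketch of the expansion sequence discussed above, run after the
# virtual disk and its partition have already been enlarged.
# /dev/md2 and ext4 are assumptions -- verify against /proc/mdstat first.
grow_basic_volume() {
    md_dev="${1:-/dev/md2}"
    mdadm --detail "$md_dev"            # shows the level: raid1 (Basic) vs linear (JBOD)
    mdadm --grow "$md_dev" --size=max   # RAID levels only; errors out on a linear array
    resize2fs "$md_dev"                 # grow ext4 to fill whatever size the array now has
}
```

On a JBOD (linear) array the `--grow` step has nothing to adjust, which would match it being safely skippable there.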
  3. Yeah, I suppose this doesn't provide for easy expansion, but it would work. I could even back up the share with Hyper Backup to a large, cheap external drive and then delete the volume and make a bigger one with new VMDKs; that way vSAN storage doesn't have to front all the extra space to have both VMDKs exist at the same time, even temporarily. Is there any way to unofficially expand the volume after increasing the VMDK size? The disk shows the new, bigger size in DSM after simply increasing it in ESXi, but the storage pool/volume is locked to the original capacity despite the underlying disks showing as bigger.

     Multiple DSMs in one box is certainly a reason I do it, but recovery, I would say, is easier if you have it all set up right. I use Veeam to back up the VM and VMDKs, so if anything goes wrong at all, a few mouse clicks and DSM is completely restored to a working state. Update testing is another one, because snapshots mean I can very quickly revert to a working state if a new DSM update bricks the install. Performance on any VM I would assume is worse than the same install on bare metal, but for DSM running on the Dell R730xd I have, performance is so good with vSAN and 10G networking that it doesn't really matter if I could get an extra couple of MB/s on bare metal. I also love the ability to work on DSM installs and bootloaders remotely over a VPN, since I can see the console output anywhere with ESXi and don't have to be physically at the box to change settings in GRUB like the SN and MAC if needed.
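On the "locked" capacity question: the virtual disk, the partition, the md array, and the filesystem each record their own size, and DSM only reports more space once every layer underneath has grown. That is consistent with the disk showing the new size while the storage pool stays at the old capacity. A hedged diagnostic sketch (device and volume paths are assumptions):

```shell
# Inspect each capacity layer to see which one is still at the old size.
# /dev/sda and /volume1 are assumptions -- substitute your own paths.
show_capacity_layers() {
    fdisk -l /dev/sda       # disk and partition sizes as the guest sees them
    cat /proc/mdstat        # md arrays and their member sizes
    df -h /volume1          # what the filesystem actually exposes to DSM
}
```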
  4. Hi all, what is the recommended config/structure in Storage Manager to allow for future expansion/increase in size of a VMDK? I believe using 2 VMDKs in RAID 1 in Storage Manager will allow one VMDK to be expanded at a time, after which the volume will increase, but this won't work in my case because I am using vSAN, so two VMDKs in RAID 1 would be a huge waste of space since each VMDK is already made redundant by vSAN. I also get that using JBOD and adding new VMDKs every time I want to expand could work, but I would prefer a method using one single VMDK that expands, even if the method involves a little work beyond Storage Manager. Curious for everyone's experience and input.
  5. Update: Realized that more codecs appear as you try to convert them. If hardware transcoding can't work, I'm fine with software; however, the transcoding quality seems far worse on 918 than on my 3617 loader. I am using the same original file, but when I change quality to "High" on the 918 loader in Video Station, the output is pixelated and bad, compared to changing the same video to "High" on the 3617 loader, where quality remains good. Any idea why the 918 loader transcodes at lower quality? How can this be fixed?
  6. Just installed DSM 6.2.2 Update 3 on loader 1.04, 918+. Video Station has the "hardware acceleration" box checked but videos won't transcode; the wheel just spins forever, and if I do an offline transcode it says "Error". If I uncheck hardware acceleration in Video Station's settings menu, transcoding works, but even on High it's poor quality, and all of the codecs that get activated on the 3617 1.03 loader don't show up on this DSM. mpeg4 and hevc are missing, see below. I am using a real 918+ serial and MAC pair.

     root@dsm-test3:/usr/syno/etc# cat /usr/syno/etc/codec/activation.conf
     {"success":true,"activated_codec":["h264_dec","h264_enc","ac3_dec","aac_dec","aac_enc"],"token":"2db1d7306789e41b5e3d5b0a70d702db"}

     I know nothing about hardware transcoding on 918, except that it supposedly can work. Do I need to do anything else to make it not return "Error" when transcoding videos? And if there is no way for it to work, how do I get all of the codecs activated like on 3617?
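One thing worth checking on a 918+ guest: DSM reaches Intel Quick Sync through a DRM render node, and on ESXi that node generally only exists if the host's iGPU is passed through to the VM. If it is missing, the "hardware acceleration" box can stay checked while every job errors out. A hedged sketch (the default node path is an assumption):

```shell
# Succeed if the given (or default) DRM render node exists -- this is the
# device DSM's hardware transcoding path uses for Quick Sync.
has_render_node() {
    [ -e "${1:-/dev/dri/renderD128}" ]
}

if has_render_node; then
    echo "Render node found: Quick Sync is at least visible to DSM"
else
    echo "No /dev/dri render node: hardware transcoding cannot work here"
fi
```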
  7. (DSM 6.2 Loader) I understand there is no guarantee of future support, and who knows if Jun will even do a DSM 7 bootloader. However, if I am buying some more used hardware right now for ESXi hosts, what is the best CPU to get for the best chance at continued compatibility? E5-xxxx v3 CPUs, which I think are Haswell, are needed to run the 918 loader, right? Will these be enough for DSM 7 too, or will I need v4 or higher CPUs?
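On the Haswell question: the 918+ image is widely reported to need Haswell-era instructions, with MOVBE the flag most often cited. A hedged check you can run on a candidate host (the exact required-flag list is an assumption; verify against your loader's requirements):

```shell
# Succeed if every required CPU flag appears in the given flags line.
has_all_flags() {
    line="$1"; shift
    for f in "$@"; do
        case " $line " in *" $f "*) ;; *) return 1 ;; esac
    done
    return 0
}

# MOVBE is the commonly cited requirement for the 918+ kernel (assumption).
cpu_flags=$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)
if has_all_flags "$cpu_flags" movbe; then
    echo "movbe present: the 918+ loader should boot on this CPU"
else
    echo "movbe missing: expect the 918+ image to fail on this CPU"
fi
```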
  8. Has anybody found a way to get DSM working on a virtual private server like Vultr or DigitalOcean (without installing ESXi on the VPS and then running DSM on that)? I am able to get Jun's loader to boot on a VPS but have no way to reach the DSM install pages, so I'm guessing either the loader is not obtaining an IP over DHCP from the VPS provider or the virtual NIC the VPS uses is not compatible with the loader. Is there a way to force verbose boot of the loader to see the output of the boot process?
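On forcing verbose boot: one common approach is to mount the loader's boot partition from another Linux system and remove the `quiet` token from the kernel command line in grub.cfg, so messages go to the console. Jun's loaders assemble that command line from version-specific variables, so where the token lives varies. A hedged sketch of the token removal itself (the example command line is for illustration only):

```shell
# Remove the 'quiet' token from a kernel command-line string so the
# kernel logs its boot messages to the console.
strip_quiet() {
    printf '%s\n' "$1" | sed -E 's/(^| )quiet( |$)/\1/'
}

# Illustrative command line only -- your grub.cfg will differ.
cmdline='root=/dev/md0 quiet syno_port_thaw=1'
strip_quiet "$cmdline"
```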
  9. Has anybody been able to get this working? I gave both VMs serial ports, different serials, different MAC addresses, started with no volumes created, MTU 9000. The cluster creates without issue, but then it reboots the passive server as the last step and the passive server never comes back up fully. The passive server's IP is pingable, and the serial output shows what looks like a successful boot because the login prompt is the last entry, but DSM is not accessible from the passive IP in a web browser and the cluster manager says the passive server is offline.
  10. On that one I was on 3617 too. I have a 3615 VM as well but haven't tried the change on that one yet, although I am guessing it still won't work, since SCSI did work on 3617 before 6.2. But I guess it's possible, like you say, that 3615 retained something 3617 did not. Will try it later. As for speed, okay, interesting. I just assumed it was disk related, but I definitely did notice slower load times of DSM, logging in, etc. when I went to 6.2.1/SATA virtual disks.
  11. Nothing changed other than going from 6.2.1 to 6.2.2, but it never worked as described in that thread either. Is there something particular you do in terms of moving the disks to SCSI afterwards? I boot up DSM with the SATA setup and everything works. I shut down the VM, add a SCSI controller, and change the volume1 disk only (not synoboot) to the new SCSI controller. I boot back up, and the following screen comes up. If I change the disk back to the SATA controller, it seems to boot up fine again.
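For reference, the SCSI controller can also be added directly in the VM's .vmx file; the keys below are standard ESXi entries, with the disk file name a placeholder. Whether DSM then sees the disk depends on the loader including a driver for the chosen `virtualDev`, and the behavior described above suggests 1.03b on 6.2.x may not:

```
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "volume1.vmdk"
scsi0:0.deviceType = "scsi-hardDisk"
```

`virtualDev` can also be `pvscsi` or `lsisas1068` on ESXi; each needs its own driver in the guest.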
  12. I have 1.03b 3617xs running 6.2.2 on ESXi 6.7u1. I only seem to be able to boot VMs with this setup using SATA disks; I used to be able to boot SCSI virtual disks before 6.2. Is there any way to get virtual SCSI disks/a SCSI controller working on the VM again instead of SATA? Performance seemed way better with SCSI emulation.
  13. Outcome of the update: SUCCESSFUL
      - DSM version before update: DSM 6.2.2-24922
      - Loader version and model: JUN'S LOADER v1.03b - DS3615xs and DS3617xs
      - Using custom extra.lzma: No
      - Installation type: ESXi 6.7u1
      - Additional comments: Reboot required
  14. Hmm, the synoboot VMDK was always SATA, and I didn't care about that one since performance doesn't matter much for a quick boot. My "Data" volume in DSM and the DSM install volume were composed of VMDKs added to the VM as SCSI, never SATA, for where DSM got installed and where I keep my data. When I did a test upgrade from 6.2 to 6.2.1, the only way DSM would actually start up was if I changed the SCSI volumes to SATA (and changed the NIC to e1000e). If I try adding a new SCSI VMDK as a new disk in the VM now, it doesn't show up in Storage Manager. If I change that same VMDK to SATA, it shows up.