LeeBear

Everything posted by LeeBear

  1. I'm sure someone will correct me, but if you don't want DSM to see the USB boot drive after it loads, don't you just modify your boot image's .cfg file and put in the PID and VID of your USB device instead of the default ones? The default for Nanoboot is something like this: kernel zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168. You can plug your USB drive into a Windows machine and use Device Manager (view resources) to get the VID and PID of the USB drive.
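The edit can be sketched like this; the 0x0781/0x5567 values are made up for illustration, so substitute the VID/PID you read out of Device Manager for your own stick:

```
# syslinux.cfg kernel line, Nanoboot default:
kernel zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168

# Same line with your USB stick's IDs substituted (example values only):
kernel zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0781 pid=0x5567
```

With the VID/PID matching the boot stick, DSM treats it as the internal flash device rather than an extra disk.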
  2. By default, Plex transcoding will try to transcode 120 seconds of video as fast as it can; that's why every CPU maxes out regardless of speed. Then it throttles down and up as it tries to maintain that buffer. While transcoding is CPU intensive, it's also I/O intensive, as there is a mix of reads and writes. If you don't want your NAS services to be impacted during a transcode, I suggest you put your apps/Plex temp folders on separate spindles from your media. Personally I use an SSD for my apps/temp and keep my media/files on a separate volume.
  3. The general rule is that for any .XXXX version update you can't use the built-in web GUI in DSM, because we don't have a real Synology onboard flash RAM to update. The update in the web GUI basically updates the onboard flash with the DSM .pat file, reboots, then updates the DSM partition on the hard drives. Since we obviously don't have the onboard flash RAM, it will fail. We emulate this process by replacing our boot device (IMG, ISO, USB) with the newer Nanoboot/gnoboot version, essentially faking the flash RAM update. Then we use the Upgrade/Degrade function in the boot menu to emulate the DSM partition update on the hard drives. There are some important differences between the two, though. The process we have to do is really a "migration" from one DSM to another (like removing the hard drives and putting them in another DiskStation). This is important to understand, because when you migrate from one DSM to another you may lose certain settings that are tied to a DSM (domain permissions, etc). So before you migrate you should ALWAYS do a Control Panel -> Update & Restore -> Configuration Backup; this creates a .dss file that you can use to restore the settings after the migration. Incremental updates, i.e. .4493 Update 1, you can do through the web GUI, as I believe those updates don't write to the flash RAM and just update the DSM partitions.
  4. Ahh, another one bites the dust. I can't help you, sorry; maybe someone else can.

"Warning for the future or for other users: RDM does not pass SMART data along, so XPenology can never know when a disk is failing. Set it up in ESXi or use passthrough."

"I have taken the disks and connected them directly to another PC. There's nothing wrong with them and the data is still there. I think this is a logical problem: somehow DSM has corrupted the partition table or something similar."

We can argue all day about the use of RDM, but Nindustries provides no real alternative, as using an ESXi virtual disk doesn't pass SMART data to XPenology either, and doing a passthrough requires at least 2 controllers: one for the Datastore of your ESXi host and one for the passthrough to the VM. The majority of people here won't have a dedicated controller, nor would Nanoboot/gnoboot have the driver to support it if it were passed through. Remember, Nanoboot doesn't have the vast driver support of FreeNAS or VMware; that's one of the reasons we virtualize. If Nanoboot has drivers for your machine, of course you're better off running it bare metal.

Anyways, back to amuletxheart's problem. First off, I don't think you should ever expand a volume by more than 1 drive at a time if you only have single-drive redundancy (RAID 5/SHR), as the drive you are adding is usually untested and can cause an unrecoverable error if multiple drives fail. I know it's tempting to save a rebuild, but think of it like this: if you expand from 4 to 5 drives and one of the drives fails, you should still be able to recover the data, as during the expansion data is written to 5 drives but only 4 are needed for recovery. If you expand with 2 drives, going from 4 to 6, data is written to all 6 drives and 5 drives are needed for recovery; if the 2 new drives fail (because they are untested) then you will not be able to recover the data, because you only have 4 good drives.

I'm trying to figure out your problem... and I'm thinking that maybe you mapped the 2 RDMs to the same physical drive... is that a possibility? Also, this won't solve your problem, but if you change your VM's SCSI controller to the LSI Logic SAS controller, DSM will show your "real" drive model and serial number instead of VMware's virtual drive. It's an easy way to check: if two drives show the same serial number, you know you have two RDMs mapped to 1 physical drive. Lastly, if you go back to your original 4-drive configuration from before your second expansion, does DSM see the volume and offer to repair it?
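The one-drive-at-a-time arithmetic above can be sketched in a few lines of Python (a toy model of single-parity RAID 5/SHR, not anything DSM actually runs):

```python
def recoverable(total_drives: int, failed_drives: int) -> bool:
    """Single-parity RAID (RAID 5/SHR): all but one drive must survive."""
    needed = total_drives - 1                  # drives required to rebuild the data
    surviving = total_drives - failed_drives   # drives still readable
    return surviving >= needed

# Expand 4 -> 5 one drive at a time; the single untested drive dies mid-expansion:
print(recoverable(5, 1))   # True  - the 4 original drives are enough
# Expand 4 -> 6 with two untested drives at once; both die:
print(recoverable(6, 2))   # False - only 4 of the required 5 drives survive
```

Same failure count among the old, tested drives is fatal either way; the point is that adding two untested drives doubles your exposure while raising the number of drives the rebuild needs.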
  5. In Hyper-V I suggest you use the ISO version of Nanoboot instead of the IMG/VHD as the boot device. Just replace your IDE (0:0) boot hard drive with an IDE (0:0) CD/DVD; you won't have to reinstall anything. DSM always seems to pick up the spare space on the boot drive, which is why you are seeing the 2MB drive. With a CD/DVD it won't show up. If you need to modify the ISO image file you can use WinISO 5.3, which is free.
  6. In step 2 of my guide, where you modify the IMG file so the boot drive doesn't show up, you can also change the serial number. There's a line containing sn=; that's the serial number. Change it to one you like, but please back up your settings in DSM first, as I believe you may lose some share permissions when you change the SN, because DSM will then think it's a different unit.
  7. I have a similar setup to yours and noticed the same slower transfer speed in ESXi. I mentioned it in this thread: http://xpenology.com/forum/viewtopic.php?f=2&t=3191. When I have time this weekend I will do some tests under ESXi and see if the speed can be improved. I think it's a driver issue (or setting) in ESXi, and I hope to figure it out, as an ESXi host is way easier to manage than Hyper-V Server.
  8. I'll try to answer your questions the best I can, since I am not an expert.

Your first question was about virtual RDM and why I suggest using it instead of a virtual disk (vmdk). Besides getting better speed, you get portability of your data. This means you can take your hard drives and put them in a real Synology DiskStation and your data will still work, or you can put the drives in a Hyper-V environment and your data will still be there. I have personally tested Hyper-V, so I know it works. The reason it works is that with RDM you are letting the DiskStation software have direct access to the disk drive to create the DSM partitions. If we compare what is physically stored on the disk when you take the hard drive out and put it in a non-ESXi environment, you get this (using a 1TB drive as an example):

virtual RDM drive: 2GB partition (DSM software version), 2.5GB partition (volume information), 965GB partition (data)
virtual (vmdk) drive: 1TB partition containing a disk.vmdk file

As you can see, if you put a virtual drive instead of an RDM drive into a real Synology DiskStation, it won't see the data or DSM partitions and will tell you to format the drive (you lose all your data). With the RDM drive it will see the DSM partition, know it came from a DiskStation, and mount the volume with the data intact.

Regarding your suggestion that virtual RDM is not a good idea, I don't think that's correct. We have to use the virtual RDM "trick" because VMware by default doesn't let you use a local drive as an RDM; it only allows a SAN or other storage. I don't believe there's any risk in using virtual RDM, because the Synology software is actually handling the RAID, and if there were some error DSM would alert you.

Now for your second question, about installing Nanoboot directly on your system. Yes, you can do that. You will need a USB stick or CD to boot from; you can't boot off an SSD, because DSM will detect the SSD drive and format it during install. Remember, a real DiskStation has a flash drive inside that it boots from; we "fake" this using Nanoboot on a USB or CD. I will not write a short guide on doing a direct install because there are similar guides already (look at the install guides for the HP N40L/N54L). The reason is that every system has different hardware, unlike ESXi, where the virtual machine is the same regardless of what hardware you have. Also keep in mind that Nanoboot has to have drivers for your hardware if you want to do a direct install, unless you know how to add your own drivers and compile Nanoboot yourself. You can always unplug your drives and make a USB stick with Nanoboot on it to see if your hardware is supported, then decide if you want to do a direct install.
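The portability comparison can be sketched like this (sizes are the 1TB example from the post; exact partition sizes vary by DSM version, so treat this as illustrative only):

```
A DSM-initialized drive (virtual RDM), as another Linux box would see it:
  sdX1   ~2GB     Linux RAID   DSM system partition
  sdX2   ~2.5GB   Linux RAID   swap / volume information
  sdX3   ~965GB   Linux RAID   data volume

The same physical drive used as vmdk-backed storage instead:
  sdX1   ~1TB     VMFS         holds disk.vmdk (opaque to a real DiskStation)
```

A real DiskStation recognizes the first layout and offers to migrate; it sees the second as a blank drive to format.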
  9. Click on the + before "Network Adapter" to expand the advanced settings.
  10. Is your VM set up to use the "Legacy Network Adapter"? If that's the case, it would explain your 10MB/s speed and not seeing the VMQ setting. If you use the regular "Network Adapter" you should see an extra "Hardware Acceleration" option. From my searching around, VMQ can be beneficial in some configurations and usage scenarios, but in my particular system running Nanoboot it was causing problems.
  11. Poechi's link pretty much covers how to install Nanoboot on Hyper-V whether you are using Windows 8 w/Hyper-V, Windows Server w/Hyper-V, or pure Hyper-V server. The only difference is with Hyper-V server there is no GUI you have to set it up via the command line. Once it's setup though you can remotely manage the server using RSAT on a Windows 8 machine. This Guide here pretty much tells you how to do all the command line stuff so you can manage it using a GUI on another machine. Just remember Hyper-V Server 2012 requires Windows 8 or Windows 8.1 to manage, and Hyper-V Server 2012 R2 requires Windows 8.1 to manage.
  12. I wasn't getting great speeds with ESXi 5.5; it was erratic, usually fluctuating between 20-80 MB/s using virtual RDM and the E1000 network adapter. I have a feeling it's the network driver or some setting in it that wasn't correct. Anyways, to test my theory I decided to duplicate the setup on Hyper-V Server 2012 R2. This was easy to do, as I had ESXi booting off a USB stick (with the Datastore on an SSD) and Hyper-V booting off an SSD. Since I didn't use virtual drives for my data, the only drive I had to convert to VHD format was my 32GB virtual application drive. Once I created the Hyper-V VM and mounted the drives, DSM started up fine, with no reinstall of anything.

I did some copying over the network to see if the speed was better, and sure enough it is. Copying approximately 100GB of mostly 50MB files in Hyper-V, the speed is way better than ESXi. This is 4 drives (5400 rpm) in SHR configuration over 1 network port. I know DSM usually shows higher rates because there's 1 drive of parity (so approximately 33% higher in a 4-drive configuration); Windows shows a very consistent transfer rate between 90-110 MB/s. I had to disable the VMQ (Virtual Machine Queuing) setting on the network controller to get these results. With VMQ on, the speed was erratic like in ESXi.

I will move back to ESXi when I have time and try to figure out if there's an equivalent setting that needs to be disabled to get the same performance as Hyper-V. If there is, then it's going to be a tough choice... ESXi is more widely supported and very simple to manage, while Hyper-V Server is ridiculously hard to manage (no GUI, can be remotely managed only from a Win 8.1 machine, usually requires a domain) but has good performance and lower power consumption (approx 43W vs 50W).
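On Hyper-V Server there's no GUI checkbox for VMQ, but it can be toggled from PowerShell. This is a sketch, not something from the original post: "DiskStation" is whatever you named your VM, and VmqWeight 0 is the documented way to disable VMQ on a VM's network adapter.

```
# Disable VMQ on all network adapters of the DiskStation VM
Set-VMNetworkAdapter -VMName "DiskStation" -VmqWeight 0

# Verify (VmqWeight of 0 means VMQ is disabled)
Get-VMNetworkAdapter -VMName "DiskStation" | Select-Object Name, VmqWeight
```

Run this in an elevated PowerShell session on the Hyper-V host (or via remote PowerShell) while the VM is off, then boot and retest your transfer speeds.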
  13. "I tried it and trashed it after it didn't finish indexing in 48 hours, and my collection was rather small at that time. Nevertheless, let's stop whining and check what this man has to say about our proposals."

If you have "Generate media index files during scans" enabled, it will take a long time (probably more than 48 hours, depending on the type of machine you have). This is different from a regular library scan, which downloads the movie info and cover art. The media index actually goes through each video and generates thumbnails at various times, so on players that support it you will see the pop-up thumbnail when you are fast-forwarding. Depending on your system this can take 30 minutes or more per movie.

One of the awesome features of Plex is its ability to transcode on the fly (if you have the hardware power to do it); this is what separates it from DLNA (and XBMC), along with the simplicity of sharing your server (just make a free Plex account). For example, besides being able to watch my media on my TV when I'm at home (like you can with DLNA or XBMC), if I'm away from home but have an internet connection I can continue to watch my media on my tablet, phone, or web browser, as the Plex server will transcode whatever I'm watching on the fly so it streams smoothly over the internet.

Now, the sharing of servers takes this even further. I have friends and relatives that created free Plex accounts; I just add their names to my friends list and now they can also stream media from my server. You have a Samsung Smart TV... my brother, who lives in another city, has a Samsung Smart TV with the Plex app. He just set it up with his free Plex account and now he can stream movies over the internet from my server as if they were local; you can't do that with DLNA. Plex is quite powerful for sharing media.
  14. Are you saying you're only getting 10 mb/s = 1 MB/s transfer speed? That doesn't sound right. If you mean you are getting 10 MB/s transfer speed, check your network adapter setting; it's probably set to 100 mb/s instead of Gigabit speed. Also, in the virtual network adapter settings, disable VMQ (Virtual Machine Queuing); it seems to slow down the network transfer or make it erratic. I was getting around 25 MB/s with VMQ enabled, and far more with it off. Not bad for 5400 rpm drives in SHR. I was never able to achieve that kind of speed when running ESXi (an erratic 20-80 MB/s), which is what made me try out Hyper-V Server 2012 R2. So far I'm pretty impressed with Hyper-V's performance and lower power usage (I measured around 7-8W lower), but I'm finding it very difficult to manage (no Device Manager, are you kidding me!).
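The mb/s versus MB/s confusion is just a factor of 8 (bits versus bytes). A quick sanity check, ignoring protocol overhead:

```python
def mbps_to_MBps(megabits_per_second: float) -> float:
    """Convert a network line rate (megabits/s) to a file-transfer rate (megabytes/s)."""
    return megabits_per_second / 8  # 8 bits per byte

print(mbps_to_MBps(100))    # 12.5  - a 100 mb/s link tops out around 12 MB/s
print(mbps_to_MBps(1000))   # 125.0 - Gigabit tops out around 125 MB/s
```

So ~10-12 MB/s transfers are the classic signature of a link negotiated at 100 mb/s, which is why checking the adapter's speed setting is the first step.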
  15. We need to know how your system is set up, especially the physical network port. If you set up the port to only allow internal connections, that would explain why you don't have internet access.
  16. Awesome to hear that the upgrade worked for you. In the future you can avoid losing user/group/shared-folder permissions by backing up your configuration file and then restoring it afterwards. Even though it only took you 2 minutes to fix this time, that may not always be the case in the future. I guess the important thing to understand is that when we use gnoboot or Nanoboot we can't do a normal "upgrade" like on a real DiskStation, because our setup doesn't have an internal boot flash that can be updated. Every time we move from one DSM version to another we are actually doing a "migration" from one DiskStation to another; while we retain the data on the disks, we lose some configuration settings (like your folder permissions, etc). That's why backing up your configuration is always a good idea.
  17. There shouldn't be any problems upgrading to DSM 5 as long as you carry over the entire set of disks (RDM and any virtual disks you have). You should also try to keep the disk order the same (although I've never had any problems with out-of-order disks). What you are doing is essentially "migrating" (taking a set of disks and moving it to another DiskStation), and you should take the proper steps to back up your configuration files before doing it. I don't have an HP N54L, but I have migrated from gnoboot 5.0-4458 to nanoboot 5.0-4482 without any problems. What I did was turn off the VM running gnoboot, create a new VM running Nanoboot using my guide, and just attach the old drives (from gnoboot). During the install step, when you use Synology Assistant to find your DiskStation, it showed as Migratable. If it says that then you are good (it's seeing your drives with the DSM partition), and when you do the install you won't get the popup that says "Disk X, Y, Z will be deleted". If it doesn't say Migratable, then come back and ask for more assistance. Good luck.
  18. You need to provide more information. Seeing the "Partition layout is not DiskStation style" message is normal. What step are you stuck on?
  19. I created RDM mappings for the physical drives using this guide (ignore the last step about the SCSI IDs). Make sure the physical drives aren't part of your Datastore and you will be able to do this. Don't worry about the 4TB size shown; it's not actually using up 4TB on your Datastore drive. The benefit of doing it this way (instead of a regular virtual disk) is faster speed, and in theory you should be able to physically remove those drives later on and put them in a real Synology DiskStation, as the raw on-disk layout will be DSM's RAID/SHR style (I used SHR). The only drive visible to the ESXi host is my 256GB SSD, which houses the Datastore. The four physical 4TB drives are essentially passed to the VM running DSM. The RAID array is created and handled by DSM (just like a real Synology DiskStation); no third-party controller, etc.

My setup is like this:
32GB USB stick - VMware ESXi is installed on this drive and boots from it
256GB SSD - added to the Datastore on the ESXi host; stores the VMs and virtual disks
4 x 4TB - mapped as raw LUNs to the VM running the DiskStation
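For reference, the linked guide boils down to one vmkfstools command per drive, run from the ESXi shell. The device ID and datastore path below are placeholders from an assumed setup; list your own devices first and substitute:

```
# List the physical disk device IDs visible to the host
ls /vmfs/devices/disks/

# Create a virtual-compatibility RDM pointer file on the datastore
# (-r = virtual RDM, which is what we want here; -z would create a physical RDM)
vmkfstools -r /vmfs/devices/disks/t10.ATA_____EXAMPLE__DISK__ID /vmfs/volumes/datastore1/DiskStation/rdm-disk1.vmdk
```

The resulting .vmdk is just a small pointer; you then attach it to the VM as an existing disk on a SCSI ID, which is why the 4TB "size" doesn't consume Datastore space.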
  20. I've been lurking in this forum for a while, have learned quite a bit, and figured it's time to contribute back and maybe help someone else. I'd like to thank Trantor, sancome, Diverge, Poechi, and anyone else I've missed for sharing their knowledge that made this guide possible.

This guide provides step-by-step instructions to create the "Perfect Install" of Nanoboot on your ESXi host. I call it the "Perfect Install" because when you are done there won't be any extra "unused" disks showing up in DSM, your disks won't start in the 3rd or 4th slot, and your Nanoboot boot drive won't get overwritten during install. You are left with something Nice, Clean, Perfect! Now let's get started!

Requirements: Nanoboot IMG file, StarWind V2V Converter, WinImage, Synology DSM 5.0-4482 .pat file

Overview of steps:
1. Create the VM
2. Modify the IMG file to prevent the boot drive from showing up in DSM
3. Convert the Nanoboot IMG to VMDK and upload it to the Datastore
4. Add hard drives to your VM
5. Install Synology DSM 5.0-4482

1. Create the VM
- Use vSphere Client and create a new Custom VM on your ESXi host.
- Name it what you want. [I used DiskStation]
- Store the VM where you want. [I used a fast SSD drive]
- Choose Virtual Machine Version 8.
- Choose any 64-bit Linux as the Guest Operating System. [I used Ubuntu Linux 64-bit]
- Configure CPU/RAM according to what you have. [I used 8 cores, 4GB RAM]
- Choose "E1000" as the network adapter.
- Choose "VMware Paravirtual" as the SCSI Controller. [Others may work as well]
- Choose "Do Not Create Disk".
- Check "Edit before completion of VM".
- Remove the CD and floppy drives from the VM configuration. [Not necessary, but I don't like unnecessary devices]

2. Modify the IMG file to prevent the boot drive from showing up in DSM
- Start up WinImage.
- File -> Open, select "NanoBoot-5.0.2.4-fat.img".
- Browse to \boot\syslinux, right click -> Extract on syslinux.cfg.
- Open the syslinux.cfg you just extracted with Notepad.
- Add "rmmod=ata_piix" (without quotes) to the end of the lines that start with "kernel /zImage". *There should be 5 such lines, but the only 2 really required are the ones labeled "MENU LABEL Synology DSM 5.0" and "MENU LABEL Synology DSM 5.0-4482".
- Save the modified syslinux.cfg and "Inject" it back into the IMG file using WinImage (overwrite the file when asked).
- Save Current Image before you exit WinImage.

3. Convert the Nanoboot IMG to VMDK and upload it to the Datastore
- Start up StarWind V2V Converter.
- Choose the NanoBoot-5.0.2.4-fat.img file you want to convert.
- Choose "VMware pre-allocated image", then choose "IDE" type. *Choosing "IDE" type is very important.
- This creates 2 vmdk files: NanoBoot-5.0.2.4-fat.vmdk and NanoBoot-5.0.2.4-fat-flat.vmdk.
- Upload the files to the ESXi host's Datastore (Configuration tab -> Storage -> right click drive -> Browse Datastore).
- Upload both vmdk files to the folder of the VM you created in the previous step; they will merge into a single vmdk automatically.

4. Add hard drives to your VM
- Edit your virtual machine settings.
- Add a hard drive.
- Choose "Use an existing virtual disk".
- Choose the vmdk you uploaded in the previous step.
- Make sure the drive is set as IDE (0:0) and check "Independent -> non-persistent". *IDE (0:0) is important because this is the boot disk that starts up Nanoboot. *Non-persistent is important because it prevents a non-booting situation after you do the DSM install. The technical explanation: during the DSM install all hard drives get repartitioned, including the Nanoboot drive; the non-persistent setting makes these changes temporary, so after a reboot the original Nanoboot boot partition is restored.

[Optional but recommended step]
- Add another hard drive.
- Choose "Create a new virtual disk".
- Choose a size of 8+ GB, Thick Provision Lazy Zeroed. [If you will be using Plex transcoding I suggest 32GB, as Plex uses a lot of temp space during transcoding]
- Choose SCSI ID (0:0), check "Independent -> persistent". *This drive will become Volume1 of your NAS later on. It is the default location where Synology apps are installed, so having them on a virtual disk [SSD] makes apps faster to launch. If you omit this step, the mechanical (data) drives you add later will become Volume1 and apps will install on the same volume as your data. Having a separate drive also makes snapshotting (backup) easier (only 8-32GB instead of TBs).

Adding your data drives:
- Add drives to your VM as you've previously done. [Use Raw Device Mapping if you can for best performance.] If your system doesn't support it you must create virtual RDMs first. Make sure they all use SCSI IDs. *IDE drive types will not show up in DSM with this install, only SCSI types. SCSI (0:0) will be the drive in the first slot, SCSI (0:1) the second slot, etc.

5. Install Synology DSM 5.0-4482
- Right click and select "Open Console" so you can view the VM.
- Start the VM. You will see the Nanoboot screen, then the menu... choose "Upgrade/Degrade".
- On the next menu choose the DSM version you want to install. [In our case 5.0-4482]
- Once fully booted, use a web browser and go to the IP address of the DiskStation. You can use Synology Assistant to find the IP.
- Choose "Install file from my computer or installation disc" and select your .pat file. *Uncheck "Create a Synology Hybrid RAID (SHR) volume after installation".
- Wait a few minutes and you should see the DSM login screen.
- Log in using the admin account.
- Skip the QuickConnect setup. [You can't use QuickConnect because our DiskStation doesn't have a real serial number in Synology's database]
- Go to Storage Manager -> Volume and create your volumes.

DONE!
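Step 2's edit can be sketched like this; the kernel line is the stock Nanoboot one quoted in post 1 above, and your image's exact parameters may differ:

```
# syslinux.cfg, before (one of the "kernel /zImage" lines):
kernel /zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168

# After, with rmmod=ata_piix appended so DSM unloads the IDE driver
# and the IDE-attached Nanoboot boot disk disappears from Storage Manager:
kernel /zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168 rmmod=ata_piix
```

This is also why the data drives must be attached as SCSI: anything on the IDE controller vanishes along with the boot disk once ata_piix is unloaded.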