XPEnology Community

Saoclyph
Transition Member · 5 posts
Everything posted by Saoclyph

  1. Saoclyph

    DSM 6.2 Loader

    My Proxmox VM conf:

        bios: seabios
        boot: c
        bootdisk: sata0
        cores: 6
        cpu: kvm64
        hostpci0: 02:00.0 # passthrough of my HBA PCI card
        hotplug: disk,network,usb
        memory: 12000
        name: dsm6-jun-v1.0.3b
        net0: e1000=CE:7B:13:B7:69:82,bridge=vmbr0
        net1: e1000=AE:ED:7B:C4:44:5B,bridge=vmbr1
        numa: 0
        onboot: 1
        ostype: l26
        sata0: local:100/synoboot_ds3617_v1.03b.raw,size=52429K
        serial0: socket
        smbios1: uuid=7c25000f-d5d1-454d-ac71-af6a0a89bf1a
        sockets: 1

    @Polanskiman, could you edit my post ID 124 of this thread to change "/.xpenology" to "/.xpenoboot" on the "additional comments" line and in the last paragraph? I misspelled the folder name, but the forum won't let me edit it myself.
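    For reference, most of the settings in the conf above can also be applied with Proxmox's `qm` CLI instead of editing /etc/pve/qemu-server/100.conf by hand. This is a sketch only: the VM ID 100 and the PCI address 02:00.0 are taken from my setup, so adjust them for yours.

    ```shell
    # Sketch: applying the key VM settings with the qm CLI (Proxmox).
    # VM ID 100 and PCI address 02:00.0 are from my configuration above.
    qm set 100 --bios seabios --cpu kvm64 --cores 6 --memory 12000
    qm set 100 --hostpci0 02:00.0                              # pass through the HBA
    qm set 100 --sata0 local:100/synoboot_ds3617_v1.03b.raw    # loader as SATA disk
    qm set 100 --serial0 socket                                # serial console for debug
    qm set 100 --boot c --bootdisk sata0
    ```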
  2. Saoclyph

    DSM 6.2 Loader

    @extenue, @Lennartt, @wenlez, @pateretou, @sashxp, @enzo, @dodo-dk

    - Outcome of the update: SUCCESSFUL
    - DSM version prior update: 6.1.7 Update 2 with Jun's loader v1.02b
    - Loader version and model: Jun's Loader v1.03b - DS3617
    - Using custom extra.lzma: NO
    - Installation type: VM Proxmox 5.2.6 - Xeon D-1537 (need to pass kvm64 cpu type), passthrough LSI SAS2116 with 5 x WD RED 3TB Raid5, 2 x WD RED 4TB Raid1 & 2 x Intel DC S3700 200GB Raid1
    - Additional comments: SeaBIOS, loader on sata and ESXi boot line. Update to U2 OK. Had to replace/delete remnant files from older loaders in /etc/rc.*, /.xpenoboot (see last paragraph below).

    Using the USB method, I got a "mount failed" as others did on Proxmox, but it was successful when using a SATA image disk:
    - Rename the loader with a .raw extension instead of .img and place it in the VM images folder /var/lib/vz/images/100/ (the Proxmox parser does not understand .img).
    - Add a sata0 disk in the VM .conf (/etc/pve/qemu-server/100.conf): sata0: local:100/synoboot_ds3617_v1.03b.raw,size=52429K
    - Choose sata0 in Options/Boot Order in the GUI.
    - At start, in the GUI console, choose the ESXi boot line.

    My VM ID is 100; replace it with yours. I also had to choose the kvm64 CPU type. A serial port is also a good thing to have for debugging. You can access the serial console with the following line (type Ctrl-O to exit):

        socat UNIX-CONNECT:/var/run/qemu-server/100.serial0 STDIO,raw,echo=0,escape=0x0f

    The serial port was much needed in my case. After I first updated from 6.1 to 6.2, the VM was starting well (Docker and SSH were OK) but I was not able to log into DSM, and after ~5 minutes from boot everything was shutting down and I was losing network (as @JBark). I thought it had completely shut down, but using the serial port I saw that it had just killed everything Synology-related, even the network config. With a fresh VM it was working well, so I tried to find differences between the DSM filesystems.

    I found that a lot of /etc/rc.* files were referencing old binaries that no longer exist, so I replaced all the /etc/rc.* files with the ones from the fresh installation. When rebooting, it was still closing down after 5 minutes, but I think this step was needed in combination with the following removals. I also saw several /etc/*.sh scripts, and a /.xpenoboot folder, that were not there in the fresh installation. After deleting them, and cleaning up the /etc/synoinfo.conf file a little (removed the support_syno_hybrid_raid option and some other options, not sure if that had an effect), everything was working great again!

    @jun Thanks for the loader!
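    The cleanup described above could be scripted roughly as follows. This is a hedged sketch, not my exact commands: FRESH_DIR is a hypothetical directory where you have copied the /etc/rc.* files from a fresh installation, and the script refuses to run until you set it. These steps are destructive, so everything is moved into a backup folder rather than deleted outright.

    ```shell
    #!/bin/sh
    # Sketch of the post-update cleanup. FRESH_DIR is an assumption: a folder
    # holding rc.* files taken from a fresh DSM install. The script aborts if
    # it is not set, so nothing runs by accident.
    : "${FRESH_DIR:?set FRESH_DIR to a directory of rc.* files from a fresh install}"
    BACKUP=${BACKUP:-/root/loader-cleanup-backup}
    mkdir -p "$BACKUP"

    # 1. Back up the stale rc scripts, then replace them with fresh ones
    for f in /etc/rc.*; do
        [ -e "$f" ] && cp -a "$f" "$BACKUP/"
    done
    cp -a "$FRESH_DIR"/rc.* /etc/

    # 2. Move aside the leftover loader folder not present on a fresh install
    if [ -d /.xpenoboot ]; then
        mv /.xpenoboot "$BACKUP/xpenoboot"
    fi
    ```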
  3. I already used this method to launch the SMART tests with the standard Test Scheduler of the Storage Manager in the web interface. But it only launches the tests; it does not work to get the SMART data into the web interface. The method I presented in the 1st post is easier to implement (no need to SSH into the NAS), which is why I used it instead. But if you still want to use the standard Test Scheduler, you need to move smartctl to smartctl1 using SSH. Once you are logged into your NAS with SSH, type:

        cd /usr/syno/bin
        mv smartctl smartctl1
        touch smartctl
        chmod +x smartctl
        vi smartctl

    Then paste the following code after typing the letter "i" for Insert, followed by "ESC", ":wq", and Enter:

        #!/bin/sh
        # fixes the drive type for smartctl under ESXi
        #log_file=/root/smart_calls.log
        #echo `date` >> $log_file
        #echo $* >> $log_file
        if [ "$1" = "-d" ]; then
            shift 2
            smartctl1 -d sat "$@"
        else
            smartctl1 "$@"
        fi

    Even though the log of smartctl calls shows that it is invoked when we access the Storage Manager in the web interface, nothing shows up there, so the web interface must be doing something else as well. And I don't know how to do something similar to this script with .cgi files.

    Regards,
    Saoclyph
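    You can rehearse the wrapper's argument rewriting on any machine before touching /usr/syno/bin. In this sketch, smartctl1 is a stub shell function (not the real binary) that just echoes the command it would run, so you can see how a "-d ata" call gets rewritten to "-d sat":

    ```shell
    # Stub standing in for the real smartctl1 binary: echo instead of execute.
    smartctl1() { echo "smartctl1 $*"; }

    # Same logic as the wrapper script above.
    wrapper() {
        if [ "$1" = "-d" ]; then
            shift 2                  # drop "-d <wrong-type>"
            smartctl1 -d sat "$@"    # force the SAT device type
        else
            smartctl1 "$@"
        fi
    }

    wrapper -d ata -a /dev/sdb   # prints: smartctl1 -d sat -a /dev/sdb
    wrapper -i /dev/sdc          # prints: smartctl1 -i /dev/sdc
    ```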
  4. I only tested this method on ESXi, but it should work in any other virtual environment with 'Raw' devices. In the DSM/ESXi case, ESXi correctly forwards the SMART commands to the drives, but DSM does not manage to identify the type of the hard drives correctly. So instead of using 'smartctl -d sat', DSM uses 'smartctl -d ata', which is not the correct type for the drives, and it fails. @Poechi you can use it in your tutorials (I don't have enough posts yet to be authorized to answer your PM)
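    If you want to confirm which device type applies to your drives, smartctl's identify flag (-i) is a quick check. This fragment is illustrative only; /dev/sdb is a placeholder for one of your drives, and you run it over SSH on the NAS:

    ```shell
    # With the wrong device type, the identify step fails; with the right one,
    # it prints the drive model and serial number.
    smartctl -d ata -i /dev/sdb   # fails on RDM drives under ESXi
    smartctl -d sat -i /dev/sdb   # works: SAT passthrough of the ATA commands
    ```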
  5. Hi all, I will present a simple way to obtain SMART data in ESXi and a very crude way of viewing it.

    My configuration:
    - HP MicroServer N54L
    - ESXi 5.5.0 Update 1
    - Nanoboot-5.0.2.4 with XPenology 5.0-4482
    - 3 x 3TB WD Red configured manually as RDM drives
    - VMware Paravirtual SCSI driver

    Getting the SMART data

    To get the complete SMART data, you need to run a short or long SMART test. To do that, create a new user-defined script in the Task Scheduler of the Control Panel. For the new script, in the General tab, the user must be root to have proper access, and you need to copy the following code into the Run command box:

        for dev in `ls /sys/block/ | grep sd`; do
            smartctl -d sat -t short /dev/$dev
        done

    Replace short with long if you want to run a long test. In the Schedule tab, set it to run once a day.

    Then we need to actually collect the SMART data generated by the test. Add another user-defined script with the following code:

        for dev in `ls /sys/block/ | grep sd`; do
            smartctl -d sat -a /dev/$dev > /volume1/web/smart/$dev.smartdata
        done

    Schedule it hourly; even if no new test has executed, some of the data are updated (temperature, power-on hours...). This produces one file per hard drive. The files need to be placed where the Web Station can read them if you want to use the Web Station to show the results.

    The SMART data for each drive are in a separate file; now they only need to be parsed. I wrote a small PHP parser, but it could be done much more nicely by someone who actually knows PHP or any other language the Web Station can execute.

    (Very) crude way of viewing the SMART data

    To view the SMART data we obtained in /volume1/web/smart/sdX.smartdata, I wrote a small parser in PHP. I don't know much PHP/HTML, so it is a very crude viewer. I placed the following code in a file named index.php in a subfolder "smart" of the "web" folder, the "web" folder being the active folder for the Web Station of DSM.
        <?php
        // list of hard drives
        exec('ls /sys/block/ | grep sd', $devs);
        echo "<h1>Smart data</h1>";

        // process the smartctl output for each drive;
        // sdX.smartdata must have been generated in the same folder as this .php
        foreach ($devs as $dev) {
            $filename = $dev . '.smartdata';
            if (file_exists($filename) && is_readable($filename)) {
                $output = file($filename);
                echo "<h2>Device: $dev</h2>";
                echo "<h3>Raw Smart results</h3>";
                echo "<pre>" . htmlspecialchars(implode('', $output)) . "</pre>";
                // pick out the "Name: value" lines (model, temperature, power-on hours...)
                echo "<table>";
                foreach ($output as $line) {
                    if (preg_match('/^(.+?):\s+(.+)$/', $line, $match)) {
                        echo "<tr><td>$match[1]</td><td>$match[2]</td></tr>";
                    }
                }
                echo "</table>";
            } else {
                echo "<h2>Device: $dev</h2>";
                echo "Missing smartctl output file $filename<br>";
            }
        }
        ?>

    Then to view it, you just need to load the page http://YourNasIPaddress/smart/
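    Before involving the Web Station, you can check a .smartdata file from the command line with the same "Name: value" pattern the PHP viewer looks for. In this sketch, a here-doc with a few typical smartctl lines stands in for a real /volume1/web/smart/sdX.smartdata file:

    ```shell
    # Fake .smartdata file standing in for real smartctl -a output.
    cat > /tmp/sda.smartdata <<'EOF'
    Device Model:     WDC WD30EFRX-68EUZN0
    Serial Number:    WD-WCC4N0XXXXXX
    User Capacity:    3,000,592,982,016 bytes
    EOF

    # Print the "Name: value" lines, same pattern the PHP viewer parses.
    grep -E '^[A-Za-z][^:]*: ' /tmp/sda.smartdata
    ```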