Napalmd

Transition Members
  • Content Count: 19
  • Community Reputation: 1 Neutral
  • Rank: Newbie
  1. Are you using the onboard NIC? I recently configured an N54L with the onboard NIC only and it works. If it is enabled, make sure you have both NICs' MAC addresses in grub (see the sketch below). After installation, check with the Fing mobile app whether the server shows up on some IP, and test the LAN cable on both ports. Are you using an extra.lzma file? You don't need it. Also, try not using the DSM file you downloaded; let the installer download the latest version.
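     For reference, a rough sketch of the grub entries in question, assuming the usual Jun's loader 1.03b layout (grub.cfg on the first partition of the loader USB); the MAC and serial values below are placeholders, adjust them to your own hardware:
        # /grub/grub.cfg on the loader USB (exact path may differ per loader build)
        set netif_num=2                 # number of NICs the loader should expose
        set mac1=001122334455           # MAC of the first (onboard) NIC, no separators
        set mac2=00112233445A           # MAC of the second NIC, if present
        set sn=XXXXXXXXXXXXX            # leave the serial as generated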
  2. I patched it but see no difference; it works as usual. It would be cool if you could patch Surveillance Station so we could add more than 2 cameras without buying the license. I get this error every time I open it, although it works fine nonetheless:
        2020-06-27T19:35:41+01:00 FileServer synoscgi_SYNO.SurveillanceStation.Notification_1_GetRegisterToken[7464]: pushservice_update_ds_token.c:50 fgets failed
        2020-06-27T19:35:41+01:00 FileServer synoscgi_SYNO.SurveillanceStation.Notification_1_GetRegisterToken[7464]: pushservice_update_ds_token.c:145 Can't set api key
        2020-06-27T19:35:41+01:00 FileSe
  3. I've been using Surveillance Station without problems, but I will patch it to see if there are any differences...
  4. Thanks for the patch. I also have a synocodectool in /volume1/@appstore/SurveillanceStation/bin/synocodectool; does it need to be patched too?
        cat /usr/syno/etc/codec/activation.conf
        {"success":true,"activated_codec":["hevc_dec","ac3_dec","h264_dec","h264_enc","aac_dec","aac_enc","mpeg4part2_dec","vc1_dec","vc1_enc"],"token":"123456789987654abc"}
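     A quick way to tell whether the two copies are even the same build before touching anything, as a minimal sketch (it only compares checksums, assuming the system copy lives at /usr/syno/bin/synocodectool):
        # compare the system binary against the Surveillance Station copy
        md5sum /usr/syno/bin/synocodectool \
               /volume1/@appstore/SurveillanceStation/bin/synocodectool
        # identical checksums -> same binary; different -> the package ships its own copy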
  5. Just an update: the scrub failed after about 8 TB scanned with no errors, then the partition got mounted read-only, with these errors in the log:
        [17129.062094] BTRFS: bad tree block start 9533940502566446060 7968269844480
        [17150.566382] BTRFS: bad tree block start 9533940502566446060 7968269844480
        [17150.566692] BTRFS: bad tree block start 9533940502566446060 7968269844480
        [17150.567000] BTRFS: bad tree block start 9533940502566446060 7968269844480
        [17167.924873] verify_parent_transid: 14 callbacks suppressed
        [17167.924876] parent transid verify failed on 21004288 wanted 11024 found 8744
        [17167.97807
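     Before blaming either the hardware or the filesystem, the per-device error counters can be worth a look; a minimal sketch, assuming the volume is still mounted at /volume1 as above:
        # per-device I/O and corruption counters tracked by btrfs
        btrfs device stats /volume1
        # kernel-side view of the same errors
        dmesg | grep -i btrfs | tail -n 50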
  6. I will do that, but in the meantime I tried to mount the volume in an Ubuntu live session following the Synology website instructions. I couldn't: it gave me the error "wrong fs type, bad option, bad superblock". So I rebooted back into Synology and it mounted the volume read-only. I did:
        syno_poweroff_task -d
        vgchange -ay
        mount /dev/vg1000/lv /volume1
        btrfs scrub start /volume1
        btrfs scrub status /volume1
        scrub status for f850dff8-f678-4c95-bff2-667ffc8ff747
            scrub started at Sun Jun 21 14:58:16 2020, running for 00:10:55
            total bytes scrubbed: 389.50GiB with 0 er
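     The "wrong fs type" error from the Ubuntu live session is what you typically see when the RAID and LVM layers were never brought up there; a rough sketch of what usually has to happen first (read-only mount, the mount point is just an example):
        # in the Ubuntu live session
        sudo apt-get install -y mdadm lvm2    # live images usually ship without these
        sudo mdadm -Asf                        # assemble the Synology RAID arrays
        sudo vgchange -ay                      # activate vg1000 so /dev/vg1000/lv appears
        sudo mkdir -p /mnt/syno
        sudo mount -o ro /dev/vg1000/lv /mnt/syno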
  7. No, all disks are fine. I have had bad drives in the past but replaced them with no problems. This time the issue was that I could not get into the server by SSH or browser, so I did a reset.
  8. I can mount the volume now, but I don't know if I am doing more damage to the filesystem. Should I run a btrfs scrub? I really don't know what to do now; maybe I'll run extended SMART tests on all drives (see the sketch below).
        root@Servidor:/tmp# mount /dev/vg1000/lv /volume1
        root@Servidor:/tmp# ll /volume1
        total 936
        drwxr-xr-x  1 root  root   174 Jun 20 01:09 .
        drwxr-xr-x 26 root  root  4096 Jun 20 01:09 ..
        drwxr-xr-x  1 root  root    18 Apr  2 17:16 @appstore
        drwxr-xr-x  1 admin users   76 Mar 29 16:21 @database
        drwxrwxrwx  1 root  root    54 Jun 18 12:00 @eaDir
        drwxr-xr-x  1 root  root    37
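     A rough sketch of those extended SMART tests, assuming the six array members really are sda through sdf as in the mdstat output in the next post; the long self-test runs inside each drive and does not touch the filesystem:
        # kick off a long (extended) self-test on every array member
        for d in /dev/sd[a-f]; do
            smartctl -t long "$d"
        done
        # check results later, per drive, once the tests finish
        smartctl -l selftest /dev/sda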
  9. The RAID sync came out with no problems:
        root@Servidor:/tmp# cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
        md2 : active raid6 sdb5[5] sdf5[6] sde5[4] sdd5[7] sdc5[2] sda5[1]
              15609134592 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
        md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5]
              2097088 blocks [12/6] [UUUUUU______]
        md0 : active raid1 sda1[0] sdb1[1] sdc1[4] sdd1[2] sde1[3] sdf1[5]
              2490176 blocks [12/6] [UUUUUU______]
        unused devices: <none>
        root@Servid
  10. Thanks, but I don't think it is the same problem. My RAID is fine; it is resyncing now because of a power loss. My problem is the btrfs partition with errors.
        mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.2
          Creation Time : Wed Dec  3 15:53:41 2014
             Raid Level : raid6
             Array Size : 15609134592 (14886.03 GiB 15983.75 GB)
          Used Dev Size : 3902283648 (3721.51 GiB 3995.94 GB)
           Raid Devices : 6
          Total Devices : 6
            Persistence : Superblock is persistent
            Update Time : Sat Jun 20 13:29:51 2020
                  State : clean, resyncing
         Active Devices : 6
         Working Device
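     A small sketch for keeping an eye on that resync without touching anything, using only the standard md interfaces shown above:
        # refresh the resync progress every 30 seconds
        watch -n 30 cat /proc/mdstat
        # or just the md2 section with its percentage and speed
        grep -A3 "^md2" /proc/mdstat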
  11. I have another server, an HP MicroServer Gen8 with DSM 6.1 and btrfs, running for more than a year without a single problem. This one is an older board with the more recent DSM 6.2.2. I now see that DSM 6.2.3 is already out; maybe I'll try it. I tested the memory for 2 hours with no problems; the server is now doing a RAID resync because it wasn't responding after I rebooted it. Anyway, I started a post; if you could check the logs I'd appreciate it.
  12. Hi, this happened yesterday and a reboot fixed the problem, but today it didn't. The volume appears in crashed mode and mounts read-only. I logged in over SSH and looked at the logs; I saw this in dmesg after the reboot: then in the messages log I found many of these. I tried to do a btrfs --repair but it didn't work, and I didn't take note of the error (see the sketch below). Now the server is shut down and I have to decide what to do next. This is a: DSM 6.2.2-24922 Update 4 - Loader version and model: JUN'S LOADER v1.03b - DS3615xs - Using custom extra.lzma: YES
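     For what it's worth, the order people usually suggest before a full repair, as a rough sketch against the device path from the earlier posts (run with the volume unmounted):
        # read-only check first: reports problems without writing anything
        btrfs check --readonly /dev/vg1000/lv
        # if a normal mount fails, try the backup tree roots, still read-only
        # (on older kernels like DSM's 3.10 the option is "recovery" instead of "usebackuproot")
        mount -o ro,usebackuproot /dev/vg1000/lv /volume1
        # --repair is a last resort and can make things worse; copy off what you can first
        # btrfs check --repair /dev/vg1000/lv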
  13. This is happening to me too. I'm seeing a lot of crashed btrfs volumes; is this a general problem? All drives are fine, but the btrfs volume crashed and only mounts read-only. I tried the repair but it does not repair.
  14. - Outcome of the update: SUCCESSFUL
      - DSM version prior update: DSM 5.0 / Nanoboot / DS3612xs
      - Loader version and model: JUN'S LOADER v1.03b - DS3615xs
      - Using custom extra.lzma: YES
      - Installation type: BAREMETAL - Intel Server Board S3200SH + Core2Duo E7500
      - Additional comment: moved 5 drives from an HP N36L (one of them with SMART errors, and another that could not expand to full capacity). Tested the new server beforehand with other drives to check functionality. Put the 5 drives in, it booted fine, migrated, and expanded with no problem. It is replacing the bad drive now.
  15. - Outcome of the update: SUCCESSFUL
      - DSM version prior update: fresh install
      - Loader version and model: JUN'S LOADER v1.03b - DS3615xs
      - Using custom extra.lzma: YES
      - Installation type: BAREMETAL - Intel Server Board S3200SH + Core2Duo E7500