XPEnology Community

luckcolors


Posts posted by luckcolors

  1. On 7/14/2022 at 5:22 AM, tahuru said:

    Hi, 

     

    Is there a simple way to upgrade to 0.9 from 0.8 without running all the steps again?

     

    Thx!



    I think the easiest solution to accomplish this would be to introduce a second, empty image as TC's home directory.
    That way TC itself would be stored on one image that can easily be swapped for another on update, while the home data persists on the second image.

     

    This of course has the drawback of requiring two disks in the VM for these images.

    As follows:
    TC.img -> mounted as TC's /root

    TCHome.img -> mounted as TC's /home/tc

     

    @pocopico Would it be possible to implement this?
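    For illustration, roughly what I have in mind (just a sketch; I'm assuming the second image would show up as /dev/sdb inside TC, the device name is only an example):

    # one-time format of the TCHome image, then mount it as tc's home
    sudo mkfs.ext4 /dev/sdb1
    sudo mount /dev/sdb1 /home/tc
    # or let TinyCore handle it at boot with its home=sdb1 boot code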

  2. Also, isn't it possible to define additional extensions to be installed in user_config.json?

    I remember seeing in the scripts that this might already be doable, but I'm unsure.

    It's much more convenient and intuitive to have all the options needed to build an image inside a single file (or two, for the cases which need the DTC files).

    Especially since, if the config file for the extensions is stored in the lkm build folder, it would get wiped every time you do ./rploader clean.

     

    It would also be cool to have the model and version optionally defined by the filename or by JSON options, for example user_config.ds918p-42661.json, instead of specifying them manually every time.

    Then scripts could just be pointed at the right user_config file for building an image and so on.
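    Purely as a sketch of what I mean (the keys and the filename convention here are made up by me, not the current schema):

    cat > user_config.ds918p-42661.json <<'EOF'
    {
      "general": {
        "model": "DS918+",
        "version": "7.1.0-42661"
      },
      "extensions": [
        "https://github.com/pocopico/redpill-load/raw/master/redpill-virtio/rpext-index.json",
        "https://github.com/pocopico/rp-ext/raw/main/v9fs/rpext-index.json"
      ]
    }
    EOF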

  3. 21 hours ago, RedCat said:

    I tried installing 2 new VM systems on Proxmox. All hardware is the same (4 CPU, 4 GB RAM, sata0 = redpill, sata1 = 100 GB virtual disk, and a VirtIO LAN card).

    VM1: 918+ 7.0.1. RedPill auto-detects the VirtIO LAN card, builds the loader, 7.0.1 starts, no extension needed.

    VM2: 918+ 7.1.0. RedPill doesn't "see" the VirtIO LAN card; it builds the loader and boots, but the loader has no IP address and I can't access DSM.

    Why, and what happened?

     

    I spent so long on this issue myself.
    It seems that, for some reason, 7.1.0-42661 does not install the default virtio extension anymore (9pfs was never included by default).

    Logging into the DSM console and running "ip a" revealed only the loopback interface.

     

    I had to add the extension manually, and then it worked.

    If I remember correctly, ThorGroup left it as a default extension to be installed, since they had a boot-time check for whether virtio needed to be loaded anyway.

    Check here: https://github.com/RedPill-TTG/redpill-virtio
    So if this was an intentional change, I think it should be reverted? @pocopico (if you are the right person to ask about this, since I'm now using TC for building the loader)

     

    You can install the extension you need manually by using "./rploader ext" with the URLs in the quoted post below.

    I don't think v9fs is always needed; you should check whether you actually need it.
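    For example, for a 918+ on 7.1.0-42661 it would be something along these lines (I'm going from memory, so double-check the platform/version string and syntax against the usage text of your rploader version):

    ./rploader.sh ext apollolake-7.1.0-42661 add https://github.com/pocopico/redpill-load/raw/master/redpill-virtio/rpext-index.json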

     

    20 hours ago, pocopico said:

     

    I think there are already these two different extensions that can cover your needs:

     

    https://github.com/pocopico/redpill-load/raw/master/redpill-virtio/rpext-index.json

    and 

    https://github.com/pocopico/rp-ext/raw/main/v9fs/rpext-index.json

     

  4. Hello @ThorGroup

    I'm trying to run RedPill for 6.2.3-25556 on Proxmox.

    Thank you for your work so far; the virtio network drivers are working perfectly and I've been able to connect to DSM. :D

     

    I was wondering whether using the virtio block / virtio SCSI drivers to speed up disk access is supported yet.

    I suppose SATA emulation would be fine for normal HDDs, but for running an NVMe SSD cache it's going to be a bottleneck.

     

    I know it's definitely not a must-be-done-now kind of issue for the beta release, but would it be possible to add support for these kinds of virtual drives?

    There are two different virtio implementations that can be used:

    •  Virtio Block (1 PCIe address per device)
    •  Virtio SCSI (1 PCIe address per controller, many devices per controller)

    The virtio SCSI one seems the most interesting, since it claims "Standard device naming". Documentation: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi.html
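    In Proxmox terms the difference looks roughly like this (VM ID 100, the "local-lvm" storage and the 32 GB size are just placeholders):

    # virtio SCSI: one controller, many disks hanging off it
    qm set 100 --scsihw virtio-scsi-pci --scsi1 local-lvm:32
    # virtio block: one PCIe device per disk
    qm set 100 --virtio1 local-lvm:32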

     

    If you think this is better reported in the virtio git issues, I'll move it there.

     

    Edit 1:

    Forgot to add:

    They are not detected at all during the install wizard; I'm going to do more testing to confirm whether they are also undetected during pool creation.

     

    Edit 2:

    Well, I was completely wrong: DSM seems to detect them both at setup time and later.

    It even updates the disk listing in real time, matching what I add or remove from Proxmox.

    I'm going to go touch grass now.

    Earlier it wasn't doing either, so I guess I had the wrong chipset? (currently i440fx).

    Edit 3:
    The chipset wasn't it.
    It's working with Q35 now as well, and the disk map is shifted as others pointed out.

    I guess the problem solved itself, or I was doing something wrong.

    I'm pretty sure, though, that earlier I did run the VM multiple times with at least one disk attached via virtio SCSI, and the setup page complained that there were no disks available (I'm currently testing with 32 GB virtual drives).

    In any case, thanks again for your work. :D

  5. So there's something new that wasn't there the first time it happened.

     

    From the storage pool log: "Disk overheat: Disk [Cache device 1] had reached 70°C, shutdown system now."

    This would really seem to be the culprit, right?

     

    It doesn't make sense for the drive to be getting this hot, though.

    I'll poke around in the case a bit; maybe it really is getting that hot, but it shouldn't be, it's a well-ventilated case.

    Could it be that the NAS is reading the temperatures incorrectly?
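    One way I plan to sanity-check that, assuming DSM's bundled smartctl can talk to NVMe drives (otherwise treat this as just a sketch):

    # read the cache drives' own temperature sensors over SSH
    smartctl -a /dev/nvme0
    smartctl -a /dev/nvme1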

     

  6. I've removed the cache and then updated, to be safe.

    The update went fine.

    I've recreated the cache (I didn't seem to need to reinstall the NVMe patch, as I could already select the drives in Storage Manager).

    Aaaaaand the moment I enabled the option, it shut down again after about 5 minutes.

     

    I'm not sure reinstalling will solve anything, as I've never used SSH for anything other than interacting with Docker and installing the patch.

    If you think it's going to help, I'll try.

    Any other log files you think I could check?
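    In case it helps, this is roughly what I've been grepping through so far (paths assumed from a stock DSM install; scemd.log may not be present on every version):

    grep -iE "overheat|poweroff|shutdown" /var/log/messages
    grep -iE "temperature|overheat" /var/log/scemd.log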

  7. Hello.

    I'm currently running baremetal DSM and it's working great. Here are some details about my setup:

    • Baremetal install
    • DSM Ver 6.2.3-25426
    • Jun's v1.04 loader
    • extra.lzma
    • nvme patch

     

    Since I wanted better performance when working with the NAS, I decided to add a read-write cache using 2 NVMe drives.

    Both drives are properly detected, I can create and attach said cache to the drive pool just fine, and the system is stable.

     

    I then also wanted to enable the "sequential I/O" option so that large file transfers hit the NVMe write cache as well.

    If I do that, everything seems to work as intended at first: I can transfer at speeds much higher than the HDDs allow.

    However, it seems that after 10 minutes or so (I haven't timed it), whether the system is idle or I'm doing something, the machine just shuts off.

     

    I've tried checking /var/log/messages and nothing stands out as to what the problem could be.

    If any of you have ideas on other things I could try, let me know.

     

    Here's part of the log file from just before the NAS shuts down:

     

    Nas synoddsmd: utils.cpp:35 Fail to get synology account
    Nas synoddsmd: user.cpp:129 get account info fail [100]
    Nas synoddsmd: synoddsm-hostd.cpp:227 Fail to get DDSM licenses, errCode: 0x100
    Nas synosnmpcd: snmp_get_client_data.cpp:150 Align history time success
    Nas synopoweroff: system_sys_init.c:95 synopoweroff: System is going to poweroff
    Nas [  972.515703] init: synonetd main process (6584) killed by TERM signal
    Nas [  972.519283] init: synostoraged main process (12775) terminated with status 15
    Nas [  972.524128] init: hotplugd main process (13271) killed by TERM signal
    Nas [  972.524501] init: smbd main process (14435) killed by TERM signal
    Nas synodisklatencyd: synodisklatencyd.cpp:659 Stop disk latency monitor daemon by SIGTERM
    Nas syno_poweroff_task: System is acting poweroff.
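    Next time it happens I'll also keep a live SSH session open to catch whatever scrolls by right before the poweroff, something like this (the output path is just an example):

    # follow the system log live and mirror it to a file on the data volume
    tail -f /var/log/messages | tee /volume1/shutdown-capture.log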

     
