XPEnology Community

tcs

Member
  • Posts: 39
  • Joined
  • Last visited
  • Days Won: 1

tcs last won the day on October 9 2021


tcs's Achievements: Junior Member (2/7) · Reputation: 7

  1. If you want more than 26 drives, there are really two key things that need to happen. On *INITIAL INSTALL*, you need "maxdisks" set to less than 26; I would suggest just using 12. Once you have gotten through the initial install and the first full boot, go back, raise maxdisks to the larger number, and then create your volumes/RAID arrays (a sketch of the two-step edit is after this list). This is because on first boot DSM takes a slice off every disk to create a RAID-1 array for DSM itself to live on. Whatever binary they use to create this partition is passed the "maxdisks" value, and if that value is >26 the binary crashes. After a system has been installed, that binary is never called again as far as I've seen, unless you attempt a "controller swap", i.e. if you went from a 3615 to a 3617 it would be called again.
  2. I ended up switching to the 3615 image; I could not get the 918 image to see the drives properly.
  3. After the redpill run, go into redpill-load and edit bundled-exts.json to add whatever extensions you need (the format is sketched after this list). If you need more guidance than that, you should probably hold off running this until it's in a more stable state.
  4. Can you ping GitHub from within the container? What happens when you manually curl the file from inside the container?
  5. Do you have a firewall running on the box the container is on? That error is telling you it can't make an HTTPS connection to GitHub. Can you reach the address from a web browser? Either you're in a country that blocks GitHub entirely, or something local to your system is doing it (a couple of quick checks are sketched after this list).
  6. Then they shouldn't be using redpill. It's not considered stable yet, nor in any way ready for general consumption. If someone isn't capable of building a bootloader themselves, they should wait for the final public release.
  7. Just for giggles I ran this for you, not on an HP box, but a 3615 image on 7.0.1-42218 on a Supermicro X10SAE with an E3-1225 v3. It ran for an hour before I killed it, with no issues other than it complaining about cache after filling up the disk, which I expect is normal.
  8. What is your maxdisks count set to? How many drives are in the system? And did you double-check your USB drive ID? (A quick way to check the VID/PID is sketched after this list.)
  9. Understood; that's why I @'d you both. I foresee endless requests for help from people adding drivers if those sizes remain the way they are. If the end goal is "usable by everyone", the manual steps listed above probably won't cut it. If the intended audience is the same as the beta's (you understand enough coding and Linux to unwind problems yourself), then it's not an issue.
  10. The broader point is that, given the size of the current drivers being put out, that volume needs to be larger by default, unless there's some reason I'm unaware of that it's stuck at 40 MB. It probably wouldn't be a bad idea to make it a flag or part of the config for the docker script @haydibe @ThorGroup. For instance, the mptsas drivers alone pretty much blow through all of the space reserved for drivers. If you need mptsas plus mpt2sas or mpt3sas on the 42218 build, you're dead in the water without a larger partition (a size check is sketched after this list).
  11. Definitely a 918+ issue with SAS adapters on bare metal. I switched to the 3615 image and it happily reads SMART data. The only outstanding issue with 3615 with SAS on bare metal is that I see this being spammed in scemd.log:

      scemd[19875]: space_internal_lib.c:1403 Get value of sys_adjust_groups failed
      space_disk_sys_adjust_select_create.c:600 Failed to generate new system disk list

      I haven't spent any time digging further, but it otherwise seems happy.
  12. It looks like syno is trying to attach a Marvell driver to the drives despite them being on an mpt2sas controller. From cat /var/log/scemd.log:

      2021-10-11T16:37:24-05:00 host01 scemd[21169]: disk/disk_is_mv1475_driver.c:71 Can't get sata chip name from pattern /sys/block/sdx/device/../../scsi_host/host*/proc_name

      Is anyone else running bare metal or passthrough LSI with the 918+ and 42218 able to check whether you see the same errors? (A quick check is sketched after this list.)
  13. Anyone run into this one? For some reason it seems to think a bunch of disks are ineligible for creating a new pool, even though the drives are perfectly healthy. I feel like I've run into this before but don't remember what was causing it. The "disk_reason_template_0" is rather humorous.
  14. I would second this. It's relatively easy to get a 918 build working if you understand JSON and can read compiler errors. That being said, if you can't, just stop trying and be patient. @Orphée - your 918 repo needed the model changed to 918p and to be pointed at the virtIO build for 4.4.180plus, if you care (roughly what that amounts to is sketched after this list).
  15. Also, a quick note on this front: it only takes two of the SAS drivers to run out of space with the 918+ build. Not sure if @pocopico did something wrong building them or if they're really just that big, but they're significantly larger than their 3615 counterparts. The 4.4.180plus mptsas driver alone is 6.76 MB (a way to verify this yourself is sketched below): https://github.com/pocopico/rp-ext/blob/main/mptsas/releases/mptsas-4.4.180plus.tgz
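
A sketch of the two-step maxdisks change from item 1, assuming a redpill-load user_config.json with a synoinfo override section (your loader's file layout may differ, and jq is just one way to edit it):

```sh
# Step 1: before the initial install, keep maxdisks at a safe value (<26).
jq '.synoinfo.maxdisks = "12"' user_config.json > tmp.json && mv tmp.json user_config.json

# ...build the loader, install DSM, and let the FIRST FULL BOOT complete,
# so the DSM system partition / RAID-1 is created with a sane disk count.

# Step 2: raise maxdisks to the real target, rebuild the loader, and only
# then create your volumes / RAID arrays.
jq '.synoinfo.maxdisks = "36"' user_config.json > tmp.json && mv tmp.json user_config.json
```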
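For item 3, a sketch of what the bundled-exts.json edit might look like. The extension id and index URL follow the layout of pocopico's rp-ext repo; treat both as assumptions and substitute whatever extension you actually need:

```sh
# bundled-exts.json maps an extension id to its index URL; append one entry.
jq '. + {"pocopico.mpt3sas": "https://raw.githubusercontent.com/pocopico/rp-ext/main/mpt3sas/rpext-index.json"}' \
  redpill-load/bundled-exts.json > tmp.json && mv tmp.json redpill-load/bundled-exts.json
```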
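For items 4 and 5, the connectivity checks I'd run from inside the build container; nothing here is redpill-specific, it just isolates where the HTTPS connection to GitHub dies:

```sh
ping -c 3 github.com                           # DNS + basic reachability
curl -v https://raw.githubusercontent.com/ >/dev/null
# If the TLS handshake in the -v output never completes, something between
# you and GitHub (local firewall, proxy, or a country-level block) is
# cutting the connection. Repeat the same curl on the host to tell
# container-local problems apart from network-wide ones.
```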
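For the USB drive ID question in item 8, a quick way to double-check the stick's VID/PID on any Linux box. The extra_cmdline keys shown in the comment are how redpill-load's user_config.json takes them as far as I've seen; verify against your loader's docs:

```sh
lsusb
# Bus 002 Device 003: ID 090c:1000 Silicon Motion ...   <- example output line
# The two hex values map into user_config.json like so:
#   "extra_cmdline": { "vid": "0x090c", "pid": "0x1000", ... }
```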
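For item 10, a rough way to total up what your staged extensions need versus the ~40 MB partition. The custom/extensions path is where redpill-load stages them in my tree; treat the path as an assumption:

```sh
du -sh redpill-load/custom/extensions/               # total staged extension size
du -sh redpill-load/custom/extensions/*/ | sort -h   # per-extension breakdown
```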
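For item 12, the check I'm asking other 918+/42218 users to run; the log path and message are straight from the post:

```sh
grep -c "disk_is_mv1475_driver" /var/log/scemd.log     # how often it fires
grep "Can't get sata chip name" /var/log/scemd.log | tail -n 3
```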
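For item 14, roughly what the fix to @Orphée's repo amounts to if you're building with haydibe's toolchain. The build id string is my assumption from that toolchain's naming conventions, so check its usage output for the exact id:

```sh
# Select the 918p model and the 4.4.180plus (virtio) kernel by build id:
./redpill_tool_chain.sh build ds918p-7.0.1-42218
```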
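And for item 15, verifying the mptsas driver size yourself; the raw.githubusercontent URL is just the direct-download form of the link in the post:

```sh
curl -sL -o mptsas.tgz \
  https://raw.githubusercontent.com/pocopico/rp-ext/main/mptsas/releases/mptsas-4.4.180plus.tgz
ls -lh mptsas.tgz                            # ~6.8M on disk
tar -tzvf mptsas.tgz | sort -k3 -n | tail    # biggest files inside the tarball
```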