XPEnology Community

bateau (Member, 49 posts)
Everything posted by bateau

  1. I apologize for not reading all of this thread, but has anyone attempted a migration from a VM with Jun's loader and DSM 6 to RedPill? I'm keen to experiment with RedPill in a VM separate from my "production" one, but I would eventually need to migrate the disk array. I suppose it may be as simple as pointing the RedPill VM at the disks and letting it "adopt" them the way DSM does when you switch between native Synology products.
  2. @Polanskiman, when using the downgrade method described by @sbv3000 I end up with a downgraded DSM, but the configuration is all gone. Presumably, since the spare drive is a clean install, it is then used to replace the system partition on the other drives. Am I missing something about how to retain the DSM configuration from before the botched upgrade?
  3. +1 to "I wish I had seen this thread." 6.2.3 Update 3 fails to connect to the network on ESXi with an LSI controller in pass-through. Rolling back to Update 2.
  4. I was a dummy and managed to kill my XPE install on ESXi with 6.2.3 Update 3. Could anyone educate me on how to roll it back or update the bootloader image to bring the machine back? - DSM version prior update: DSM 6.2.3-25426 Update 2 - Loader version and model: JUN'S LOADER v1.03b - DS3617xs - Using custom extra.lzma: NO - Installation type: VM - ESXi 6.7 Serial log below:
     Hunk #1 succeeded at 184 (offset 13 lines).
     patching file etc/synoinfo.conf
     Hunk #1 FAILED at 261.
     1 out of 1 hunk FAILED -- saving rejects to file etc/synoinfo.conf.rej
     patching file linuxrc.syno
     Hunk #1 FAILED at 39.
     Hunk #2 succeeded at 131 (offset 32 lines).
     Hunk #3 succeeded at 694 (offset 179 lines).
     1 out of 3 hunks FAILED -- saving rejects to file linuxrc.syno.rej
     patching file usr/sbin/init.post
     cat: can't open '/etc/synoinfo_override.conf': No such file or directory
     START /linuxrc.syno
     Insert Marvell 1475 SATA controller driver
     Insert basic USB modules...
     :: Loading module usb-common ... [ OK ]
     :: Loading module usbcore ... [ OK ]
     :: Loading module ehci-hcd ... [ OK ]
     :: Loading module ehci-pci ... [ OK ]
     :: Loading module ohci-hcd ... [ OK ]
     :: Loading module uhci-hcd ... [ OK ]
     :: Loading module xhci-hcd ... [ OK ]
     :: Loading module usb-storage ... [ OK ]
     :: Loading module BusLogic ... [ OK ]
     :: Loading module vmw_pvscsi ... [ OK ]
     :: Loading module megaraid_mm ... [ OK ]
     :: Loading module megaraid_mbox ... [ OK ]
     :: Loading module megaraid ... [ OK ]
     :: Loading module scsi_transport_spi ... [ OK ]
     :: Loading module mptbase ... [ OK ]
     :: Loading module mptscsih ... [ OK ]
     :: Loading module mptspi ... [ OK ]
     :: Loading module mptsas ... [ OK ]
     :: Loading module mptctl ... [ OK ]
     :: Loading module megaraid_sas ... [ OK ]
     :: Loading module mpt2sas
     [ 2.113257] BUG: unable to handle kernel paging request at ffffffffa020764c
     [ 2.114024] IP: [<ffffffff81341123>] sd_probe+0x303/0xab0
     [ 2.114347] PGD 1810067 PUD 1811063 PMD 231251067 PTE 0
     [ 2.114442] Oops: 0002 [#1] SMP
     [ 2.114455] Modules linked in: mpt2sas(O+) megaraid_sas(F) mptctl(F) mptsas(F) mptspi(F) mptscsih(F) mptbase(F) scsi_transport_spi(F) megaraid(F) megaraid_mbox(F) megaraid_mm(F) vmw_pvscsi(F) BusLogic(F) usb_storage xhci_hcd uhci_hcd ohci_hcd(F) ehci_pci(F) ehci_hcd(F) usbcore usb_common mv14xx(O) cepsw(OF)
     [ 2.114467] CPU: 0 PID: 4009 Comm: insmod Tainted: GF O 3.10.105 #25426
     [ 2.114480] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
     [ 2.114493] task: ffff880234c8c800 ti: ffff880231054000 task.ti: ffff880231054000
     [ 2.114506] RIP: 0010:[<ffffffff81341123>] [<ffffffff81341123>] sd_probe+0x303/0xab0
     [ 2.114518] RSP: 0018:ffff8802310577e0 EFLAGS: 00010202
     [ 2.114531] RAX: ffffffffa0202600 RBX: ffff8802311b1970 RCX: 0000000000000001
     [ 2.114544] RDX: ffffffff8132eca0 RSI: ffffffff8177d13b RDI: ffff880232f56000
     [ 2.114557] RBP: ffff8802312f2800 R08: ffffffff810889e0 R09: 0000000000000000
     [ 2.114569] R10: ffff880232d38ac0 R11: 0000000045fa1bf3 R12: 00000000fffffff4
     [ 2.114582] R13: 0000000000000007 R14: 0000000000000000 R15: 0000000000000007
     [ 2.114595] FS: 00007fcea045f700(0000) GS:ffff88023fc00000(0000) knlGS:0000000000000000
     [ 2.114608] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [ 2.114620] CR2: ffffffffa020764c CR3: 000000023103c000 CR4: 00000000001607f0
     [ 2.114633] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     [ 2.114646] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
     [ 2.114659] Stack:
     [ 2.114671] ffff880232db4000 ffff8802312f9898 0000000000000000 ffff880200000000
     [ 2.114684] ffff8802311b1970 ffff8802311b1970 ffffffff8184bdc8 0000000000000006
     [ 2.114697] ffff880230bfdc28 ffff8802310579f8 ffffffffa0000e97 ffff8802311b1970
     [ 2.114710] Call Trace:
     [ 2.114728] [<ffffffffa0000e97>] ? SYNOSATADiskLedCtrl+0xd87/0x1cd5 [cepsw]
     [ 2.114746] [<ffffffff81307e8a>] ? really_probe+0x5a/0x220
     [ 2.114764] [<ffffffff81308050>] ? really_probe+0x220/0x220
     [ 2.114781] [<ffffffff81306223>] ? bus_for_each_drv+0x53/0x90
     [ 2.114799] [<ffffffff81307e18>] ? device_attach+0x88/0xa0
     [ 2.114816] [<ffffffff81307388>] ? bus_probe_device+0x88/0xb0
     [ 2.114835] [<ffffffff8130578f>] ? device_add+0x5cf/0x6a0
     [ 2.114853] [<ffffffff813267ca>] ? scsi_sysfs_add_sdev+0x6a/0x2c0
     [ 2.114872] [<ffffffff81324323>] ? scsi_probe_and_add_lun+0xee3/0xf70
     [ 2.114890] [<ffffffff813230b6>] ? scsi_alloc_target+0x276/0x310
     [ 2.114909] [<ffffffff813246dd>] ? __scsi_scan_target+0x8d/0x5d0
     [ 2.114927] [<ffffffff81305a8f>] ? device_create+0x2f/0x40
     [ 2.114944] [<ffffffff81324cf3>] ? scsi_scan_target+0xd3/0xe0
     [ 2.114960] [<ffffffff8133113f>] ? sas_rphy_add+0x10f/0x170
     [ 2.114984] [<ffffffffa01e80e1>] ? mpt2sas_transport_port_add+0x321/0xc40 [mpt2sas]
     [ 2.115005] [<ffffffffa01d9d15>] ? _scsih_scan_finished+0x1f5/0x2b0 [mpt2sas]
     [ 2.115022] [<ffffffff81324fc7>] ? do_scsi_scan_host+0x67/0x80
     [ 2.115042] [<ffffffffa01d8d87>] ? _scsih_probe+0x4b7/0x8c0 [mpt2sas]
     [ 2.115060] [<ffffffff8129d250>] ? pci_device_probe+0x60/0xa0
     [ 2.115076] [<ffffffff81307e8a>] ? really_probe+0x5a/0x220
     [ 2.115093] [<ffffffff81308111>] ? __driver_attach+0x81/0x90
     [ 2.115111] [<ffffffff81308090>] ? __device_attach+0x40/0x40
     [ 2.115127] [<ffffffff81306183>] ? bus_for_each_dev+0x53/0x90
     [ 2.115144] [<ffffffff81307628>] ? bus_add_driver+0x158/0x250
     [ 2.115161] [<ffffffffa0208000>] ? 0xffffffffa0207fff
     [ 2.115178] [<ffffffff81308718>] ? driver_register+0x68/0x150
     [ 2.115195] [<ffffffffa0208000>] ? 0xffffffffa0207fff
     [ 2.115215] [<ffffffffa02081f0>] ? _scsih_init+0x1f0/0x21c [mpt2sas]
     [ 2.115233] [<ffffffff810003aa>] ? do_one_initcall+0xea/0x140
     [ 2.115250] [<ffffffff8108be04>] ? load_module+0x1a04/0x2120
     [ 2.115267] [<ffffffff81088fc0>] ? store_uevent+0x40/0x40
     [ 2.115283] [<ffffffff8108c5b1>] ? SYSC_init_module+0x91/0xc0
     [ 2.115303] [<ffffffff814c5dc4>] ? system_call_fastpath+0x22/0x27
     [ 2.115316] Code: 01 00 00 8b 8b 14 ff ff ff 8b 93 10 ff ff ff 8b b3 18 ff ff ff ff d0 41 89 c5 41 89 c7 48 8b 83 90 fe ff ff 48 8b 80 10 05 00 00 <44> 89 a8 4c 50 00 00 e9 21 fe ff ff 90 83 3d 19 af 62 00 01 41
     [ 2.115329] RIP [<ffffffff81341123>] sd_probe+0x303/0xab0
     [ 2.115342] RSP <ffff8802310577e0>
     [ 2.115354] CR2: ffffffffa020764c
     [ 2.115367] ---[ end trace 6badd467ebda8a0a ]---
     ... [FAILED]
     :: Loading module mpt3sas
     [ 2.119980] mpt3sas version 22.00.02.00 loaded
     [ 3.106657] usb 1-1: new full-speed USB device number 2 using xhci_hcd
     [ 3.118890] Got empty serial number. Generate serial number from product.
  5. I was a dummy and managed to kill my XPE install on ESXi with 6.2.3 Update 3. Could anyone educate me on how to roll it back or update the bootloader image to bring the machine back? Serial log below:
  6. - Outcome of the installation/update: UNSUCCESSFUL - DSM version prior update: DSM 6.2.3-25426 Update 2 - Loader version and model: JUN'S LOADER v1.03b - DS3617xs - Using custom extra.lzma: NO - Installation type: VM - ESXi 6.7 - Additional comments: Doesn't show up on the network; ESXi doesn't show an IP address being requested. @flyride, may I trouble you for some assistance? As far as I recall, FixSynoboot.sh was applied previously.
  7. I don't think I did anything special with my ESXi host. I passed the LSI controller through to XPenology and it "just works".
  8. Not a reassuring sign when https://prerelease.synology.com/ blows up with a PHP error.
  9. Hello folks. I'm setting up a new XPenology instance in ESXi using a SuperMicro X10SL7-F motherboard with an 8-port LSI 2308 SAS controller. Which of the existing loaders/DS models is best suited for this purpose? Just as an experiment I used 1.04b/DS918+, which appears to have installed without a hitch. However, the SAS drives don't report serial/firmware/temperature in Storage Manager. From the command line I can still find their basic stats with smartctl --all (see the short sketch at the end of this listing). EDIT: After poking around the forum, it seems that DS3617xs might have better SAS support. Trying that out, and so far so good: the network works and the SAS drives show up correctly in Storage Manager. Thank you.
  10. - Outcome of the installation/update: SUCCESSFUL - DSM version prior update: DSM 6.2.3-25426 - Loader version and model: JUN'S LOADER v1.04b - DS918+ - Using custom extra.lzma: NO - Installation type: VM - ESXi 6.7 - Additional comments: Reboot required
  11. - Outcome of the installation/update: SUCCESSFUL - DSM version prior update: DSM 6.2.3-25426 - Loader version and model: JUN'S LOADER v1.04b - DS918+ - Using custom extra.lzma: NO - Installation type: BAREMETAL - HP Z230 (Xeon E3-1245 v3 - 8GB) - Additional comments: Reboot required
  12. Thank you. That settles it: I'll boot from USB and dedicate the SSD to scratch.
  13. Right now I have ESXi installed on the SSD connected to the C224 controller. Should I bother booting from a USB drive instead (the motherboard has a USB-A connector on the board, convenient for hiding a pen drive)?
  14. Would you mind educating me on the concept of scratch disks for ESXi? I have a ton of reading to do on ESXi, but as I understand things so far, I would use an SSD attached to the C224 SATA controller for VM storage (my Linux VMs, synoboot, etc.). I would pass the LSI controller through to XPenology and let DSM own the 8 drives attached to it. The chassis has room for one 2.5" drive, which ought to be plenty for VM partitions.
  15. @flyride, at the moment I'm running an HP Z230 with an E3-1245 v3, the 1.04b loader, no custom extra.lzma, and the latest DSM version. The new (to me) hardware is a SuperMicro X10SLM-F, an E3-1265L v3, and an LSI 9211-8i HBA in a U-NAS 810A chassis, so I can have 8 drives and the option of SAS.
  16. Thank you @flyride. I was reading your ESXi fix thread. I don't plan to use an NVMe cache or transcoding via DSM (the Plex server is on an NVIDIA Shield at the moment). Could you recommend a guide for migrating from bare metal to a hypervisor? I've been scouring the forum and it seems that's possible via drive migration.
  17. Good morning, everyone. I'm in the process of upgrading my XPenology hardware and I wanted to understand the pros/cons of using ESXi vs. bare metal. Right now I have a bare-metal installation and I use VM Manager to run several VMs. The main pro of moving to ESXi would be decoupling those VMs from XPenology and having them available independently of the NAS's state. The con I've found so far is the different boot process and the synoboot fix-up needed for ESXi. Are there any other considerations I should be aware of? The hardware in the new box isn't significantly different from the old one; I'm just moving to an 8-bay chassis with an LSI HBA rather than the HP tower I was using before. Still running an E3-1245 v3 as the CPU and 16GB of RAM (it can be expanded; that's just what I have).
  18. @SteinerKD One more question, if I may. Do you have any experience mixing SATA and SAS drives on LSI controllers? Specifically, running 4x SATA on one cable (channel? I'm not sure of the terminology) and 4x SAS on the other.
  19. What kind of fan did you attach? The U-NAS 810A enclosure I'm using is pretty restrictive. There's a pair of 120mm fans blowing over the drives, the CPU will have a Noctua NH-L9i on it, and there's a 60mm fan venting the motherboard compartment. The 9211 appears to have a 40mm heatsink on it, so I'll do some measurements to see if I can fit a 40mm Noctua fan on top of the heatsink.
  20. Thank you for the reply. Was there anything special you needed to do for XPE to support the 9702, or did it work out of the box? I haven't gotten mine up and running yet, as I'm waiting on some hardware to come in.
  21. @IG-88, are there other low-power options with a little more grunt than the J5005 but not as much power draw as a Haswell Xeon or i7?
  22. Thank you for the feedback, everyone. I got my hands on a U-NAS NSC-810A chassis and an LSI 9211-8i card in IT mode. Now the big decision is whether I set up the U-NAS as a JBOD expansion and keep my Z230 chassis, or buy a mATX Haswell board and transplant the entire setup into the U-NAS. Does anyone have a recommendation for the least problematic Haswell mATX motherboard capable of supporting an i7-4790 or E3-1245 v3? Both are massive overkill for a NAS, but since I already have the parts...
  23. @IG-88 and @CreerNLD, thank you for chiming in. I will look into the LSI 9211-8i and the threads referring to it. Does it work out of the box, or does it need firmware flashing? I am trying to keep costs low, which is why I'm trying to repurpose an existing chassis. I did a very quick Google search and it seems the Z230 motherboard is proprietary enough that it doesn't easily transplant into a different chassis. It seems the Silverstone DS380 is a popular mini-ITX NAS choice. I would need to find a mini-ITX motherboard to fit the E3-1245 v3 that's in the Z230; otherwise it's pretty much a new build, driving up costs.
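
A note on the smartctl usage mentioned in post 9: the following is a minimal sketch of how drive details can be read from the DSM command line when Storage Manager shows less than expected. The device name /dev/sdb is only an example, and whether sudo or a root shell is needed depends on your DSM setup.

     # Print identity, health and temperature details for one drive (example device name)
     sudo smartctl --all /dev/sdb

     # SATA disks sitting behind a SAS HBA sometimes need the SAT pass-through forced
     sudo smartctl -d sat --all /dev/sdb

SAS drives report SCSI log pages rather than the ATA SMART attribute table, which may be part of why some loader/model combinations show less detail in Storage Manager than smartctl does.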