XPEnology Community

Thecus N7700Pro (version 1) - anybody done this?


sstillwell


I've had no problems setting up Xpenology on a VMware Fusion VM using a little 8GB thumb drive, although making sure the drive is connected to the host before booting is kinda finicky. Even when I download the MBR version of Jun's 1.02b boot loader, it doesn't seem to boot unless the VM is in UEFI mode, which is worrying; I don't think the N7700PRO will have a UEFI option in its BIOS.

 

At any rate, I have a fully-loaded Thecus N7700PRO (the older 32-bit OS version) with a Core Duo processor, 4 GB RAM (3 usable) and 7x 2 TB Seagate HDD.  I've ordered a cheap video card to slap in the PCIe slot that normally holds the optional 10 Gb Ethernet card.  Also, if this pans out, I have an LSI 9201-16e card that I could throw in that same slot to give me more SATA ports (currently it's in my ESXi box, passed through to a Napp-IT VM that provides NFS storage for the VMs).  I enjoyed setting that up, but I'd like to get the 16GB RAM allocated to that VM back for other purposes, so moving the VMs back to external storage sounds like my best bet.

 

Has anyone used the N7700PRO for Xpenology?  Some googling leads me to think that I can simply pull the DOMs out and it will automagically boot from USB...true?

 

Has anyone used the LSI9201 with Xpenology?

 

I'll undoubtedly have more questions as I go along, but any pointers to good information would be appreciated.  I've spent most of today googling this and I don't think I'm much farther ahead than when I started.

 

Thanks in advance,


I've not set up a Thecus with XPE/DSM, but I think you are on the right track:

1) Connect monitor/keyboard and if possible serial port (I recall most Thecus boards have a serial header)

2) Boot and see what default video output and bios options you get

3) Remove the DOM and see how the boot process goes vs 2)

4) Try a test boot with XPE/DSM5.2, that's less fiddly to set up and has more built-in drivers/modules

5) Check the hardware PCI devices (SATA, NIC, etc.) with lspci, so you have a list of devices for debugging

6) Test boot with your 1.02b loader (3615 recommended)

7) Debug :)

 

Things that might be problematic: onboard SATA vs add-on controllers, and disk numbering vs physical slots. A port multiplier being used. Disk activity lights not working. The enhanced extra.lzma file from @IG-88 may be needed for controllers and NICs.
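Step 5's hardware inventory is worth keeping as a baseline for later debugging. Here's a minimal sketch of filtering that inventory down to the controllers that matter; the sample text stands in for real `lspci` output from the box, and the device names in it are illustrative, not a claim about what the N7700PRO actually contains.

```python
# Sketch of step 5: keep only the PCI devices relevant to storage/network
# debugging. Feed this real `lspci` output in practice; SAMPLE_LSPCI below
# is made-up illustrative data, not actual N7700PRO hardware.
import re

SAMPLE_LSPCI = """\
00:1f.2 SATA controller: Intel Corporation 82801 SATA AHCI Controller
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Serial Attached SCSI controller: LSI Logic SAS2116 [9201-16e]
04:00.0 VGA compatible controller: Some Vendor Basic Display Adapter
"""

def interesting_devices(lspci_text, keywords=("SATA", "Ethernet", "SCSI")):
    """Return (address, description) pairs for the device classes we care about."""
    devices = []
    for line in lspci_text.splitlines():
        m = re.match(r"(\S+)\s+(.*)", line)
        if m and any(k in m.group(2) for k in keywords):
            devices.append((m.group(1), m.group(2)))
    return devices

for addr, desc in interesting_devices(SAMPLE_LSPCI):
    print(addr, desc)
```

Saving that filtered list before and after swapping controllers makes it easy to spot which device a missing driver corresponds to.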

 


6 hours ago, sbv3000 said:

Port multiplier being used.

 

 

The chassis that I use with the LSI card is a straight SATA external chassis - no port expanders involved.  8 drive bays on 2x 4-channel SFF-8088 to SFF-8088 SATA cables coming from the LSI card to the chassis.  I've got 4x WD RED 8TB drives and 2x WD RED 4TB drives in that.  Video card should be here tomorrow (PCIe 8x video cards aren't an off-the-shelf purchase at any of my local stores, unfortunately) and I'll try to start tinkering with it.

 

If I light up all the ports eventually and get DSM to recognize them, that gives me a total of 23 drives - I think I've read that people have pushed units as far as 24 before...so I'll be cautiously hopeful. :)
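The port tally above works out as follows (counts taken from this thread; note that from what I've read, DSM's default disk limit may also need raising in synoinfo.conf before it will show that many bays — treat that as an assumption to verify):

```python
# Quick tally of the SATA ports planned in this thread.
onboard_bays = 7   # N7700PRO internal drive bays
lsi_ports = 16     # LSI 9201-16e external SATA/SAS lanes
total = onboard_bays + lsi_ports
print(total)  # 23
```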


Well, that was rather anticlimactic... :)

 

Got the video card, put it in along with a USB keyboard and mouse.  Booted right up, but as expected there was no way to get it to boot from USB while the DOMs were still installed.  Disassembled the unit a little further to get at the DOMs and pulled those.  Buttoned it back up and it booted straight into Jun's bootloader 1.02b for 6.xx.  It came right up and was found in find.synology.com.  When I started setup it saw all seven drives. 

 

I've set up RAID6 on it and a couple of shares to test.  Even while it's still scrubbing the disks, it goes about 2.5x as fast as the Thecus did for SMB copies, peaking at 160 MB/sec over the two GigE connectors in a bonded pair.
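For what it's worth, 160 MB/sec is a good sign that traffic is actually spreading across both links of the bond, since a single GigE link tops out well below that once protocol overhead is subtracted. A rough back-of-envelope check (the ~6% overhead factor is an assumption, not a measurement):

```python
# Back-of-envelope: can 160 MB/s come from one GigE link, or does it
# require both links of the bond? The overhead fraction is an assumed
# rough allowance for Ethernet + TCP/IP + SMB framing.
LINK_GBIT = 1.0                  # nominal GigE line rate, Gbit/s
PAYLOAD_FRACTION = 0.94          # assumed fraction left for payload

per_link_mb_s = LINK_GBIT * 1000 / 8 * PAYLOAD_FRACTION   # ~117.5 MB/s
bond_ceiling_mb_s = 2 * per_link_mb_s                     # ~235 MB/s

observed = 160.0
print(f"single-link ceiling: {per_link_mb_s:.0f} MB/s")
print(f"two-link bond ceiling: {bond_ceiling_mb_s:.0f} MB/s")
print("needs both links:", observed > per_link_mb_s)
```

So the observed rate sits between the one-link and two-link ceilings, which is what you'd hope to see from a working bond with multiple streams.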

 

The only downside I've seen so far is that scrubbing the array is taking about 40-50% CPU in Resource Monitor, so I'm paying the price for using older/slower hardware.

 

I'm going to let it finish scrubbing the array and test rebooting and file copies rather extensively before firing shots in anger, but so far, so good!

 


Looking really good so far.  The array has been scrubbed, and I now have about 9.1TB available to play with (7 x 2TB drives in RAID6).  Copying over either SMB or AFP gets me approximately to line saturation (DSM says between 107-120 MB/sec in the resource monitor widget).  Writing such a load to the DSM gives me varying CPU loads depending on whether the share has Advanced Integrity Protection and Compression turned on or not, but usually under 50% CPU.  Reading gives me far less load - maybe 15% CPU.  It outperforms the original Thecus OS by a substantial margin...but to be fair the Thecus OS is many years old and is running a MUCH older version of the Samba and Netatalk stacks, totally aside from the kernel version.  Disk utilization is fine - with Gigabit Ethernet I can't push the drives hard enough to matter.
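The 9.1TB figure checks out: RAID6 sacrifices two drives' worth of capacity to parity, and DSM reports sizes in binary units (TiB), while drive vendors count decimal terabytes. A quick sketch of that arithmetic:

```python
# Why 7 x 2 TB in RAID6 shows up as ~9.1 TB in DSM: two drives' worth of
# capacity goes to parity, and DSM reports in binary units (TiB) while
# drives are sold in decimal terabytes (10**12 bytes).
n_drives, parity_drives = 7, 2
drive_bytes = 2 * 10**12                      # one "2 TB" drive
usable_bytes = (n_drives - parity_drives) * drive_bytes
usable_tib = usable_bytes / 2**40
print(f"{usable_tib:.1f}")
```

So 10 decimal TB of usable space comes out to roughly 9.1 TiB, matching what DSM shows.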

 

Next step is probably to load the Synology VAAI plugin onto my VMware hosts.  At some point I'll probably want to test the LSI SATA card in there, and maybe upgrade the processor to a T6700 Core 2 Duo, but for basic file sharing it's pretty much a done deal aside from more testing.

 

Nifty!


