My old 802.11n access point started having problems recently, so I replaced it with a D-Link DAP-1360.
I was pleasantly surprised to find a remote syslog feature, but was saddened when that feature seemingly disappeared after upgrading to the latest firmware (2.11).
A little searching revealed a very neat D-Link support tool: they have a GUI emulator!
Using the emulator I was able to find the old page name for the syslog config, and tried it out on my new AP.
Plugged in the syslog server IP, hit save, and success!
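For anyone setting up the receiving end: the AP just fires UDP syslog packets at port 514, so the server has to be listening. Here's a minimal sketch for a Linux box running rsyslog (the AP's IP address and the log path are placeholders; syslog-ng or plain syslogd have their own equivalents):

```conf
# /etc/rsyslog.conf on the syslog server: enable the UDP listener
$ModLoad imudp
$UDPServerRun 514

# optional: route messages from the AP (placeholder IP) to their own file
:fromhost-ip, isequal, "192.168.0.50"    /var/log/dap1360.log
```

Restart rsyslog afterwards and the AP's messages should start showing up.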
I needed to get X Windows set up to run the Oracle GUI installer remotely. The X Windows package group in Red Hat is a little more than what I wanted or needed, so I set out to find the minimal set of packages required to get the job done.
Here's all you need:
yum install xorg-x11-xauth xorg-x11-xinit xorg-x11-deprecated-libs libXtst
and you can install xterm for good measure if you wish.
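With those packages in place, running the installer is just a matter of X forwarding over ssh. A quick sketch (oracle@dbhost is a placeholder for your own user and host):

```shell
# Connect with X11 forwarding enabled (placeholder host):
#   ssh -X oracle@dbhost
# Then, on the remote box, sanity-check the forwarded display
# before kicking off runInstaller:
check_display() {
    if [ -n "${DISPLAY:-}" ]; then
        echo "X forwarding is up: $DISPLAY"
    else
        echo "DISPLAY is not set; check X11Forwarding in /etc/ssh/sshd_config"
    fi
}
check_display
```

Note that sshd needs X11Forwarding enabled in its config, and the xauth package above is what lets sshd set up the forwarded display's magic cookie.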
A year and a half later and it's still chugging along quite happily. Now that the official FreeNAS 0.7 release is available I'm tempted to try upgrading, but I haven't come across any major bugs that affect me, so it's probably not worth the trouble.
Here's a look back at my disk space usage over the last year. If you recall, this is a 2 TB volume.
I will certainly run out of space before the end of 2010 if I don't start archiving or expand the storage pool. Unfortunately I haven't seen any good disk enclosures out there that support 4-5 drives over USB or eSATA, so I think I will be burning some DVDs here soon.
I finally decided to replace my trusty xbox running xbmc with something capable of HD playback. The acer revo is a very slick little box with the nvidia ION chipset, which is now supported by the VDPAU renderer in xbmc. At only $200 this is a pretty darn good deal.
Setup was incredibly simple, from unboxing to playing glorious 720p content in less than an hour. There are plenty of detailed instructions out there on the web, but I'll recap what I did:
1. download the xbmc-live iso
2. use unetbootin to transfer the iso to an sd card
3. insert sd card, power on, hit F12 to get to boot menu, boot from sd card
4. select install xbmc to local disk
5. set video render to vdpau
6. set both audio outputs to hdmi
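unetbootin did the trick here, but for the command-line inclined, dd can write an image straight to the card, assuming the ISO is hybrid-bootable (if it isn't, the extra work unetbootin does is exactly what you need). A sketch, with the device name as a placeholder — double-check it with lsblk first, because dd will happily clobber the wrong disk:

```shell
# Copy an image file block-for-block onto a target device (or file).
write_image() {
    src=$1
    dst=$2
    dd if="$src" of="$dst" bs=4M conv=fsync
    sync
}

# Usage (DESTRUCTIVE - verify /dev/sdX really is the SD card first):
#   lsblk
#   sudo write_image xbmc-live.iso /dev/sdX
```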
There's a single screw on the bottom that you need to undo before you can open up the case. After I installed the additional memory, I went into the bios settings and bumped the iGPU Frame Buffer Size up to 512MB.
Once installed, it takes about 20 seconds from power on before it's ready to go. Not quite as fast as the xbox, but still perfectly acceptable. The new default skin, 'Confluence', is pretty slick. I still haven't figured out how to get at all the old menus that I was used to with ProjectMayhemIII, but it's still growing on me. The new library features are pretty rad, and indexing is so much faster now that I just have it scan for new content at startup. I always hated having to manually tell it to recheck my movies folder.
Video playback is excellent, I threw all the HD content I had at it and didn't notice any slowdowns or stuttering. Finally getting to see the 'Planet Earth' series on the big screen was totally worth it! I definitely need to grab some 1080p content to test out.
I plan to figure out how to get the old xbox remote to work with the new system; it's a much simpler remote, and half the buttons on the new one don't do anything anyway!
The one last thing that I may try to do is utilize the empty mini pci-e card slot inside. It seems that I could install a small SSD and take the 160GB drive and use it for something else.
Update (Jan 5th): For those of you wondering about 1080p content, I watched an h264 rip of Star Trek at that resolution, and playback was flawless.
I'm going to make an effort to post more on here in 2010, so here's a start:
I always wondered why sometimes ssh connections from my ubuntu box would seem to take forever, but putty would prompt me for my password almost immediately.
As I discovered today, there is an authentication method called GSSAPI, used for Kerberos, that my ssh client was trying to use. It would try twice, time out, and then continue on to pubkey or keyboard auth. This is where the extra 5-10 second delay was coming from.
A quick edit to /etc/ssh/ssh_config to comment out the GSSAPI options:
# GSSAPIAuthentication yes
# GSSAPIDelegateCredentials no
and my ssh connections are established immediately.
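If you'd rather not touch the system-wide file, the same thing can be done per-host in ~/.ssh/config (a sketch; the host name is a placeholder):

```conf
# ~/.ssh/config - skip GSSAPI only for the hosts where it times out
Host slowbox.example.com
    GSSAPIAuthentication no
    GSSAPIDelegateCredentials no
```

You can also test the theory on a one-off connection with ssh -o GSSAPIAuthentication=no user@host, or watch the auth attempts go by with ssh -v.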
That's one less frustration to deal with 🙂
I took a few panoramic shots while I was in Italy. Using the open-source app Hugin to stitch the photos together I got the following results.
I don't like how the sky is overexposed in most of them, but overall I'm pleased with the results.
The hardware all arrived on Thursday; I threw it all together and got it powered up with no trouble.
At the first power on, I immediately wanted to fix the fan on the north bridge. It started off with a bit of a clicking noise, but even after that quieted down, the motor noise was still pretty loud and very high pitched. So I took that fan off, and mounted an 80mm fan to the vent on the case side panel. Much better 🙂
Just a few temperature measurements as presented in the bios, ambient was 23C.
No Northbridge Fan:
80mm Case Fan:
Perfectly reasonable temps w/o that noisy little fan involved. And the CPU seems quite happy as long as it gets a little airflow.
Haven't had much time to play with it, but I did manage to get the ubuntu 8.04 amd64 iso converted to boot off a usb device using the excellent unetbootin tool. The system felt quite snappy and I was pleased at how quickly I was able to open a few openoffice documents.
Unfortunately unetbootin didn't seem to work with any of the FreeBSD ISOs I tried, which is a shame because I eventually want to run FreeNAS on this system. I might just have to break down and borrow a CD drive from another system to get it installed.
Oh, and the best part of all. According to my kill-a-watt meter, power draw is at 30W! I should have no trouble keeping this system under 45W when I get drives in it.
The first concept I had for my file server revolved around the Asus Eee Box. It met the low power requirements, was very quiet, and its diminutive dimensions would make it perfect to tuck away. However, using this box would mean that I'd be stuck with external USB drives for storage. While this would not be terrible, there would be the added cost of a disk enclosure, and performance would take a hit as well.
Then I found out that Intel sells an Atom based motherboard, complete with SATA II ports and a PCI slot. After a little further reading about the product, I decided to go with it and picked out components to round out the system.
Here's what I got:
- Intel BOXD945GCLF Atom processor Intel 945GC Mini ITX Motherboard/CPU Combo
- Intel PWLA8391GT 10/100/1000Mbps PCI PRO/1000 GT Desktop Adapter
- G.SKILL 1GB 240-Pin DDR2 SDRAM DDR2 667 (PC2 5300) Desktop Memory
- SYBA SY-IDE2MC-4B IDE to CF/MMC/MS/SD Adapter
- COOLER MASTER Centurion 541 RC-541-SKN1 MicroATX Mini Tower
- Antec earthwatts EA380 380W ATX12V v2.0 Power Supply
- 2 Transcend 2GB Secure Digital (SD) Flash Card Model TS2GSD133
I've already come in at least $30 cheaper than what the rumored Eee Box price would be, and I've got something much more flexible.
In reading about this Intel motherboard, I learned that there are some compatibility issues between linux/bsd and the integrated Realtek ethernet controller. I opted for the Intel gigabit PCI card to ensure compatibility, but also because the onboard controller only supports 10/100mbit.
The SYBA adapter takes both CompactFlash and SD cards and fully supports DMA for speedy data transfers. I also picked up two 2GB SD cards for my trip to Italy. When I get back, one of them will become the primary drive for the file server. No noise, and very low power consumption. I'm not sure how well the SD card will work, but I don't expect any major problems.
I feel that the power supply is a bit overkill right now, but when looking for power supplies compatible with the Intel mobo, there were limited options that meet the 80+ efficiency rating and include an ample number of SATA power connectors. I've always liked Antec's power supplies, and I'm sure this one will work nicely.
I haven't purchased the disk drives yet, but I will likely choose the WD Caviar GP 1TB drive. They are damn near silent, and as part of Western Digital's 'Green Power' line they've managed to reduce power consumption by a few watts.
My order should be arriving on Monday or Tuesday. Pictures and an update will follow when I've got it all together.
I've wanted to build a dedicated file server for quite some time now. I had a lot of requirements floating around in my head for what I wanted to achieve, mainly:
- 1TB+ of fault tolerant disk space - I've already got 5 disk drives in a variety of sizes holding my precious data. Losing a disk would be a tragedy. So the new storage system needs redundancies in case of a disk failure.
- ZFS - I'm a big fan of what Sun has brought to the table with ZFS. Snapshots are perfect if you've ever blown important files away with an accidental slip on the delete key. And the ability to grow a pool makes future expansion a breeze.
- Energy Efficient - My current server is my old desktop. On average it draws 165W of power, all day, every day. Worst part is, 98% of the time it's not doing a damn thing. For the new server I'd like to be using < 60W. I feel like using less electricity than a standard light bulb is a fairly good goal.
- Easy to Manage - I've seen some truly terrible management GUIs in my time, and I feel that most companies really underestimate how important a solid design is. A good web based interface is my preference, but I also want to be able to muck around at a lower level if needed.
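To put the energy target in perspective, here's a quick back-of-the-envelope on what a 24/7 box draws over a year (the $0.10/kWh rate is just an assumption; plug in your own):

```shell
# Yearly energy use at a constant draw: watts * hours / 1000 = kWh
awk 'BEGIN {
    hours = 24 * 365                  # 8760 hours in a year
    old   = 165 * hours / 1000        # kWh/yr for the current 165W desktop
    new   = 60  * hours / 1000        # kWh/yr at the 60W target
    printf "old: %.0f kWh/yr  new: %.0f kWh/yr  saved: %.0f kWh/yr\n", old, new, old - new
    printf "at $0.10/kWh that is about $%.0f/yr saved\n", (old - new) * 0.10
}'
```

That's over 900 kWh a year saved, which is real money on the power bill.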
Today I purchased most of the hardware to build a box that would meet all of these goals. I decided to wait until after I return from my trip to Italy to purchase the hard drives. Tomorrow I'll get into the details of the hardware I picked out and the overall plan for how I'm gonna do this.
This King5 article talks about mercury contamination from the mining operation in the Silverton area years ago. I happened to be up there earlier this year and took pictures of the concentrators. As you can see, it's amazing that one of them is still standing.