Remote Server Administration

For a few weeks now I’ve been rebuilding my lab so I can brush up on SCVMM and other cool technologies (InfiniBand, anyone?). With multiple lab servers for testing, it becomes pretty hard to keep track of all my securely generated passwords and the ports needed for web, SSH, monitoring, etc.

Enter RDM, or Remote Desktop Manager, to manage and keep track of a lot of what I would otherwise have to put in OneNote or, eek, Notepad. RDM has a plugin for KeePass, so I can look up logins or secure notes without having to open KeePass separately, and it gives me a simpler way to generate passwords. It also saves your hosts and other entered data to Devolutions’ server, so you won’t lose your settings if your computer crashes.

 

RDM in Action!

One of the best parts of RDM, or remote desktop managers in general, is that I don’t need 15 RDP windows open to test out a few things. It’s one application with tabs, and if you need to, you can pop one out. For my lab I would NAT the RDP ports to the servers and desktops, so every RDP window had the same name but a different port; if I was away from the lab for a few weeks, I would forget which server was on which port (a rough sketch of that kind of port mapping is below).
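If the NAT box happens to be a Windows machine, netsh’s portproxy can handle that kind of mapping (on a router it would be the equivalent port-forward rules). A minimal sketch, with made-up internal IPs and ports:

```powershell
# Hypothetical example: expose each lab box's RDP (TCP 3389) on a unique
# external port so one address can reach every server. The IPs and ports
# below are placeholders.
netsh interface portproxy add v4tov4 listenport=33891 listenaddress=0.0.0.0 connectport=3389 connectaddress=192.168.10.11
netsh interface portproxy add v4tov4 listenport=33892 listenaddress=0.0.0.0 connectport=3389 connectaddress=192.168.10.12

# List the current mappings to see which external port goes to which server
netsh interface portproxy show v4tov4
```

RDM saves you from memorizing that table at all, since each entry stores its own host, port, and notes.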

I also like that they have a free edition with reduced functionality, but once I realized how much more efficient I was, I purchased the full version. Back when I started IT work I was against custom applications that only had one feature; whenever I got a new computer I would have to bring over all of the applications and settings. For RDP management I would keep a folder for each of my labs, add RDP shortcuts to the folders, and make a toolbar out of the parent folder. It worked for a while, since I would typically be on one server and connect through other tools from there.

Before RDM

As you can see, the folder option works, but it gets cumbersome really quickly, and you can’t keep quick notes on the servers. Overall, RDM is easy to use and has a lot of features that I haven’t even tried yet.

What tools do you use for server management?

Lab Goals

When I get my parts back, my plan is to set up my lab like the diagram below. The important technologies and services that I’m going to be testing are:

  1. Hyper-V Live Migration
  2. Hyper-V Failover Clustering
  3. Exchange Clustering for High Availability (DAG and CAS) and Site Resilience
  4. Multisite Redundancy for Hyper-V and SharePoint
  5. iSCSI Boot
  6. Data Protection Manager BMR
  7. Terminal Server Remote App

It’s a big list, wish me luck :).
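To give a flavor of items 1 and 2, here’s a minimal sketch of standing up the Hyper-V failover cluster, assuming the FailoverClusters PowerShell module from Windows Server 2012. CHI-HV2, the cluster name, and the address are placeholders for my lab boxes:

```powershell
Import-Module FailoverClusters

# Validate the nodes first - the report flags anything that would
# keep the cluster from being supported
Test-Cluster -Node "CHI-HV1", "CHI-HV2"

# Create the cluster with a static management address (placeholder IP)
New-Cluster -Name "CHI-HVCL1" -Node "CHI-HV1", "CHI-HV2" -StaticAddress 192.168.10.50
```

Live migration and Exchange DAGs both sit on top of Windows failover clustering, so this is step zero for most of the list.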

Home Lab Update

So I went with the Supermicro X9SCM-F-O. The only difference from the X9SCL-F I originally picked is that it has SATA 3 (6Gb/s) ports. It was about $30 more and worth it, since I have the Crucial M4 SATA 3 SSDs. Read: fast, with a drive that can support it :).

I’ll write up a new post about the parts and my planned setup next week, but here’s a quick update. The first 8GB RAM DIMMs I got didn’t work, so I ordered the ones that Supermicro said they supported. I plugged the new ones in, went to turn it on, and the computer didn’t POST.

After emailing Supermicro for about a day, they finally had me call them. When I was on the phone, the rep had me take the motherboard out of the case and see if it gave the same POST codes. I don’t know if moving it out of the case would change anything, but I took the motherboard out and, sure enough, the same POST error beeps sounded when I turned it on. When I finished reading the serial number to him he said, “Ah, that’s why. The BIOS needs to be updated to support Ivy Bridge.” He said that the motherboard was made one month before the new BIOS was released. Just my luck.

The good news is that Supermicro can update the BIOS so it supports the processor; the bad news is that I have to ship both boards out to them to get it done. I hope that Supermicro can send me a shipping label, since Newegg said the boards supported the new processors. If not, I might RMA them to Newegg and hope for the best.

Here are some quick pictures of the parts and my test setup.

[Photos of the parts and the test setup, 2012-08-30]

Thought – New Home Testing Lab

I’m thinking of getting two new computers so I can test out clustering in my home lab. Below is a list of parts that I’m going to buy in the next few months. My goal is to stay a little above $500, but since I want to test new virtualization technologies I might have to up my budget. Hopefully I can make some of it back by selling my first-gen test lab components.

CPU: Intel Xeon X3430 – Has VT-d and VT-x [$233]


Motherboard: SUPERMICRO MBD-X9SCL-F-O LGA 1155 – Has dual NICs and remote management [$179]


SSD: 128GB Samsung 830 – [$129] (or 256GB [$274])


RAM: Kingston 8GB (2 x 4GB) KVR13E9K2/8I – Cheap and ECC [$63]

PSU: CORSAIR Builder Series CX430 – [$44]

Total: $711

DroboPro iSCSI IOmeter Test

I ran the IOmeter test from VMKtree against a DroboPro, a RAID 5 array on a 3Ware 9650SE-4LPML, and a RAID 0 array on an LSI SAS 9211-8i. I have four 1TB drives, WD and Seagate, in the RAID 5 array, and two 2TB drives, three 1TB drives, and two 750GB drives in the Drobo. The LSI array has three 10K drives in RAID 0.

The Drobo uses 76W at its max and 75W idle. I wasn’t able to get the Drobo to spin down after 15 minutes, but when I manually put it in standby it uses 15W and waits for the iSCSI connection to reconnect (I have to reconnect it manually – I think I broke it when I changed the IP a few weeks ago). The server that hosts both of the other arrays uses about 185W max and around 166W when idle.
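For the manual reconnect, something like the sketch below is what I mean, assuming the iSCSI initiator cmdlets that ship with Windows 8 / Server 2012 (older boxes would use iscsicli or the Initiator control panel instead). The portal address and the IQN filter are placeholders for the Drobo:

```powershell
# Register the Drobo's portal with the initiator (placeholder address)
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.20

# Find the target and log in, persisting the login across reboots so the
# volume comes back without this manual step next time
$target = Get-IscsiTarget | Where-Object { $_.NodeAddress -like "*drobo*" }
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true
```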

Controller: LSI SAS 9211-8i
RAID: 0
Size: 835GB

[IOmeter results chart]

Controller: 3Ware 9650SE-4LPML
RAID: 5
Size: 2.73TB

[IOmeter results chart]

Controller: DroboPro
RAID: BeyondRAID
Size: 7.34TB (16TB logical)

[IOmeter results chart]

In-House Test Lab

My test lab, mainly for Hyper-V, consists of an Asus P5K WiFi/Premium motherboard, a Q6600 quad-core processor overclocked to 3.0GHz, 8GB of RAM, and a 5-in-3 IcyDock hot-swap tray. I have two 300GB VelociRaptors in RAID 0 running Windows Server 2008 R2 and two 1TB Western Digital RE3s in RAID 0 for the virtual machine store. I’m currently getting 230MB/s read and 215MB/s write on the VelociRaptors, and 231MB/s read and 214MB/s write on the RE3s. Before I upgraded to the RE3s, I was running a single 500GB Western Digital Black hard drive as the host drive and a 250GB Seagate drive for the virtual machine store, with 4GB of RAM.

The performance on the 500GB and the 250GB was OK with two virtual machines (VMs) running at a time, and barely usable with three, let alone a whole test domain. I ended up bringing a lot of my VMs down to 512MB of RAM so I could give the Exchange server 2GB and the SharePoint server 1GB. It was so bad that I would have to wait minutes for all the servers to come back from being paused or, even worse, for the Exchange server to start. I ended up upgrading everything, since I spent more time waiting than learning or testing.
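For reference, that kind of memory juggling looks something like this with the Hyper-V PowerShell module from Server 2012 (on 2008 R2 I was doing the same thing in Hyper-V Manager). The VM names are placeholders, and the VMs have to be powered off to change their static memory:

```powershell
# Shrink the small roles so the heavy ones can breathe (placeholder names)
foreach ($vm in "LAB-DC1", "LAB-SQL1", "LAB-SQL2") {
    Set-VMMemory -VMName $vm -StartupBytes 512MB
}

Set-VMMemory -VMName "LAB-EX1" -StartupBytes 2GB   # Exchange gets the most
Set-VMMemory -VMName "LAB-SP1" -StartupBytes 1GB   # SharePoint in between
```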

I also built a second workstation/server so I could test out iSCSI and gigabit Ethernet. Something about using network storage as your local hard drive seems pretty cool: you can have a fast server hosting the VMs and a medium-fast (translation: way cheaper) machine for storage. The box, an Antec P180, consists of a P5Q Turbo motherboard with an E7300 dual-core processor, 4GB of RAM, and a 74GB VelociRaptor running Windows Storage Server 2008, plus two 1TB first-generation RE3s and a dual-port PCIe gigabit network card. It also has a 5-in-3 IcyDock tray with a 500GB drive and a 160GB drive running Windows 7 for random OS and software testing. I’ll write up another post about the IcyDocks; so far they have been great. I moved twice and rearranged the two PCs more times than I can remember, and the docks still work.

A few of the VMs I built are member servers; the rest include a domain controller, two SQL servers, an Exchange server, and a Data Protection Manager server for backups. Depending on a few things, I might also build one or two Windows 7 machines so I can carry out some client-side testing. Below you can see a physical map of the lab.

 

Physical Map

CHI-HV1 V3 Virtual Build

I also purchased a few Cat6 cables from Monoprice to test out iSCSI performance. I want to see if a switch slows down the transfer speed and if bridging the ports on the gigabit cards gives any performance increase (a quick sketch of how I plan to measure it is below). Altogether, I’m glad I upgraded the hard drives along with the RAM. In the end, virtual machine juggling isn’t fun.
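Nothing fancy for the measurement: time a large copy to the storage box with the switch in the path, then again with a direct cable, and compare MB/s. A minimal sketch; the file and share paths are placeholders:

```powershell
# Time a large file copy over the link under test and compute throughput
$file = "C:\Lab\testfile.bin"                  # a multi-GB file works best
$dest = "\\STORAGE1\vmstore\testfile.bin"      # placeholder share on the storage box

$seconds = (Measure-Command { Copy-Item $file $dest }).TotalSeconds
$mbps    = ((Get-Item $file).Length / 1MB) / $seconds
"{0:N1} MB/s" -f $mbps
```

Running the same copy a few times and averaging should smooth out caching effects.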

I found this post in my drafts last week; it looks to be 10-12 months old. I’ll work on getting a newer overview of my lab up this month.