Friday, 16 December 2011

Converting a Debian VM from Xenserver to vSphere

You can transfer the machine's data from one environment to the other using Clonezilla. Once it was transferred I found that the machine did not boot. On further investigation I realised that there are many differences between a Debian install on XenServer and one on vSphere.

If you get the message "Operating system not found" you will need to install GRUB to the virtual hard disk. To do this, boot a live CD (such as Clonezilla).

As root, run grub. At the grub> prompt type:

grub> find /boot/grub/stage1

This will return the location of GRUB's stage 1; make a note of this location for the next command. Type:

grub> root (hd0,0)   <- this is the location returned by the last command

Now install grub to the master boot record (MBR) with:

grub> setup (hd0)

then quit grub with:

grub> quit

Reboot to get the GRUB menu, but if you let it continue with the default settings it will very likely hang, because it will try to use a block device name that begins with xvd; these names are XenServer specific and will not exist on the VMware VM. To get it to continue booting, hit 'e' on the GRUB selection screen, then edit the boot line to replace the device that looks like /dev/xvda1 with /dev/sda1. Also remove "console=hvc0", otherwise you will not be able to see or interact with your OS via the console (hvc0 is another XenServer-specific device name).
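For illustration, the kernel line you edit at the GRUB menu goes from something like the first line below to the second; the kernel version shown here is made up, use whatever your menu lists:

kernel /boot/vmlinuz-2.6.32-5-686 root=/dev/xvda1 ro console=hvc0
kernel /boot/vmlinuz-2.6.32-5-686 root=/dev/sda1 ro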

Once we have booted the system we need to go about preparing it to boot correctly when left to its own devices (geddit?). Firstly, ensure the keyboard keymap is set up correctly with:

# dpkg-reconfigure console-data

Edit /etc/fstab to change all references to xvd devices to their sd equivalents.
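If you are confident the disk really does appear as /dev/sda (it did on all of mine), the same sed trick used for menu.lst further down works here too:

sed -i 's/xvd/sd/g' /etc/fstab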

On many of the machines I transferred using this method I found that they had no swap partition after the transfer. So I simply set the disk size slightly bigger than the source disk when initially creating the VM in vSphere, then added a partition with fdisk. I usually set the swap partition as /dev/sda2.
You then need to make that partition a swap partition with (I have often found that a reboot is required before the command works):

# mkswap /dev/sda2

Then you can mount it with:

# swapon /dev/sda2

And see if the swap space is available with:

# free

Don't forget to update /etc/fstab so it points to the correct device for the swap partition and the swap is mounted automatically at boot.

The CD-ROM is set to /dev/hdc unless you have changed the VM's hardware settings; update this in /etc/fstab as well.
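For reference, the relevant /etc/fstab entries on my VMs ended up looking something like this; the mount point and options are just what I happened to use:

/dev/sda2   none            swap          sw           0   0
/dev/hdc    /media/cdrom0   udf,iso9660   user,noauto  0   0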

We need to change GRUB's configuration so that it automatically boots with appropriate parameters; this information is stored in /boot/grub/menu.lst. You can achieve this either by running the following commands:

sed -i 's/xvd/sd/g' /boot/grub/menu.lst
sed -i 's/console=hvc0//g' /boot/grub/menu.lst

Or edit /boot/grub/menu.lst by hand: find the line that begins # kopt, remove console=hvc0 from the end and change the "xvd" device to the correct boot device. Then move down to the bottom where the menu choices are configured, and change the device and remove console=hvc0 from each of the entries you are likely to use (only the top one in my case).
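As an illustration only (your root device and kernel options may differ), the # kopt line goes from something like the first line below to the second; note that it must stay commented out, as update-grub reads it that way:

# kopt=root=/dev/xvda1 ro console=hvc0
# kopt=root=/dev/sda1 ro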

Edit /etc/inittab and comment out the line that begins "co:"

Remove the XenServer-specific package source from your APT sources with:

rm /etc/apt/sources.list.d/citrix.list

Update your packages list:

apt-get update
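Before removing anything in the next step, it can be worth checking exactly which kernel packages are installed:

dpkg -l 'linux-image*' | grep ^ii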

Remove all the installed kernels with:

apt-get remove linux-image*

You will get a warning asking about the removal of the running Linux kernel; say "No" to this, as we will install another one that we can get headers for in the next step.

Install a more appropriate kernel with:

apt-get install linux-image-2.6-686

Reboot to run the newly installed kernel.

Install VMware tools (details of how are in my post here), then reboot to test.

To get the most out of the VM you should switch the virtual hardware to paravirtual drivers for the SCSI controller and NICs.


Thursday, 15 December 2011

Convert a Windows machine to a VMware VM with Clonezilla

I have had need to do manual conversions of physical and virtual machines for several reasons previously. Some of these reasons have included:
  • There is not enough free space on the machine's disk(s) (also known as "I haven't got time to wait for the machine's administrator to tidy up").
  • VMware Converter no longer supports the conversion of that operating system.
  • We can't afford PlateSpin.
  • I wonder if I could do a conversion the hard way.
This takes me back to my first ESX project. The company I worked for had about 8 test machines sitting along one office wall. The situation was already out of hand and I was being asked for more test machines. I decided we needed to think big, and got approval for ESX on a couple of big (at least for us) servers to virtualise the test environment. I managed to move most of the test machines simply by installing a fresh OS and installing and configuring all the software myself, with the help of the relevant departments. Then I came to our Windows NT 4.0 software build machine... No-one, not even the developer who put it together, knew what was on there. We were building releases every couple of weeks at the time. It goes without saying this is not a good place to be, and barring a complete hardware failure on the physical machine there was no way anyone was going to be rebuilding it. I scratched my head; I had previously used dd and tar to move installs from one machine to another on Linux, so I had a crack at it, and to my surprise a few hours later I had run a P2V by hand with a Linux live CD. I had even managed to convince Windows NT to let me switch the drives from IDE to SCSI.

Having used Clonezilla to store a few images from machines that were shipped without restore media in the past, I wondered how easy it would be to back up an image of a machine and restore it into a new virtual environment. As it turns out, with Clonezilla I don't even need to create an image; the tools are there to transfer the data from one machine's disk and write it directly to the disk of another.

The conversion here is from a XenServer VM to a VMware VM, but the technique will probably work elsewhere.
  • Put a Clonezilla live CD in both your (virtual) machines and boot them.
  • The default boot option for Clonezilla worked fine for me on both machines, your mileage may vary.
  • Select your language options as necessary
On the source machine:
  • Select "Start_Clonezilla"
  • Select "device-device"
  • Select "Beginner"
  • Select "disk_to_remote_disk"
  • Select "dhcp" or assign an IP address with "static"
  • Now it should show you a list of disks in the system, select the one you wish to transfer.
  • Select "Skip checking/repairing source file system" 
  • Clonezilla will now show you the command it is about to run, press "Enter."
  • You will be asked to confirm at a few important stages that you wish to proceed.
  • When you see "Waiting for the target machine to connect..." you have finished with the source machine for now, except take a note of the commands it is telling you to run on the destination machine (see the image below).
On the destination machine:
  • Select "Enter_shell"
  • Select (2)
  • Type: sudo su - (to change to the root user)
  • Type: ocs-live-netcfg (and setup the networking)
  • Type: ocs-onthefly -s 192.168.1.1 -t sda  (replacing 192.168.1.1 and sda with the relevant IP address and device name)
  • You will again be asked to confirm at the important stages.
You should see a progress bar which includes an estimate for the finish time.


When this finishes you should be able to boot the destination VM and install the VMware tools to make everything pretty and efficient. Don't forget to install the paravirtual SCSI and network card drivers if your OS is supported.


If you have success (or failure) with this migration method please leave a comment below, it certainly helped me out of a hole.

Monday, 5 December 2011

Virtual Center Template for Debian

When using a Windows template, VMware's Virtual Center can help you customise the resulting virtual machine with a wizard; unfortunately it doesn't do this for Debian, which happens to be my Linux distribution of choice.

To create the template I just installed a nice minimal installation onto a new VM, added the VMware tools, changed the VM to use paravirtual hardware, then converted it to a template.

After each "Deploy from template" I simply run through the following:

It would be a potential security risk to have all the VMs using the same SSH keys, so I regenerate the SSH keys for the VM with:

# rm /etc/ssh/ssh_host*


# dpkg-reconfigure openssh-server


Update packages

# apt-get update


# apt-get upgrade


Set the IP address (well it is likely to be a server) with:

# vi /etc/network/interfaces
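For reference, a minimal static stanza in /etc/network/interfaces looks something like the following; the addresses here are only examples:

auto eth0
iface eth0 inet static
        address 192.168.0.50
        netmask 255.255.255.0
        gateway 192.168.0.1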

Change the hostname, as it isn't helpful to have duplicate hostnames due to the confusion it can cause me:

# vi /etc/hostname

Amend the hosts file (to match the new hostname):

# vi /etc/hosts


Do a quick confidence reboot, then go about installing software onto the VM as usual to make it fit for the task you need it for.

Thursday, 1 December 2011

Paravirtualizing vSphere guests

Although just setting up a virtual machine on an ESX server works well, there are ways to improve the VM's performance and reduce some of the CPU overhead required when the VM interacts with the virtual hardware. The two I have used are installing paravirtualized drivers for the network interfaces and for the storage controller.

Windows
If you are installing Windows from media or an ISO image:
  • Edit the VM settings, replace any networks adapters with ones that are of the type "VMXNET 3."
  • Change the SCSI controller type to "Paravirtual" 
  • On the floppy drive choose "use existing floppy image in datastore:", click browse, then find the relevant image in vmimages\floppies for your version of Windows. Don't forget to select "connect at startup".
  • Hit F6 during the initial installation to add the paravirtual SCSI drivers from the floppy image. 
  • The OS won't recognise the network card until the VMware tools are installed.
If the VM is already set up and running Windows with VMware tools, the drivers for the paravirtualised SCSI adapter are already there (they come with VMware tools), but you will need to add a second SCSI adapter of the type "Paravirtual" and boot once before Windows will be happy with the first (boot) SCSI adapter being set to "Paravirtual."
  • Edit the VM settings, replace any networks adapters with ones that are of the type "VMXNET 3"
  • Using the vSphere client, edit the settings of your Windows VM, click the "Add..." button, choose "Hard Disk", choose "Create a new virtual disk" and set a small size like 8 MB.
  • If your VM already has a SCSI controller you will need to add an additional one. To get another controller, choose a virtual device node that sits on the second SCSI controller; I usually just select SCSI(1:0).
  • Check Device Manager to ensure the PVSCSI controller has been detected.
  • Shut down the VM, remove the hard drive you have just added and change the remaining SCSI adapter to type "Paravirtual".
  • Start the machine; if all is well (you will see a blue screen if it isn't) you will have successfully changed Windows to use a more efficient way to talk to its boot drive.
Converting IDE drives to SCSI
I struggled with a Windows XP box that I had imported using VMware Converter, which only had an IDE drive. I tried deleting the hard drive (being careful not to delete the underlying files), but when I went to add the hard drive back and set it to SCSI it would only allow IDE. I overcame this by shutting down the virtual machine, editing the relevant .vmdk file and changing the ddb.adapterType parameter to lsilogic like so:
ddb.adapterType = "lsilogic"
You should now be able to add the hard drive back in, specifying the existing .vmdk file, and set the SCSI adapter to "Paravirtual".
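If you prefer to make the change from the ESXi shell rather than a text editor, a rough sketch is below; myvm.vmdk is a made-up name standing in for the small descriptor file, and I would take a copy of it first:

cp myvm.vmdk myvm.vmdk.bak
sed -i 's/ddb.adapterType = "ide"/ddb.adapterType = "lsilogic"/' myvm.vmdk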

Debian 6
On a system that was already running with the VMware tools installed, I ran all updates and then it was simply a case of removing the existing network adapter, adding a new one of the type "VMXNET 3", and changing the SCSI adapter to "Paravirtual"; everything just worked.

Update: also check out this post on changing the IO scheduler on Linux VMs, squeezing more from your Linux VMs.

Wednesday, 30 November 2011

VMware CPUID masks for AES-NI and PCLMULQDQ

Today I have been messing with CPUID masks in ESXi/vSphere. Most people in my position wouldn't normally have to worry about this, but I work for a small company, so the budget (especially these days) is necessarily lean and we chose to go for vSphere 5 Essentials Plus, which does not include Enhanced vMotion Compatibility (EVC).

The problem I had was that there were two servers bought at different times and, as is often the case, they had slightly different specifications based on the deals the vendors were offering at the time. Basically the Intel processors had slightly different capabilities (a Xeon E5530 and an E5620). The problem came up when I tried to vMotion VMs from one box to the other: Virtual Center complained that the older server lacked the PCLMULQDQ and AES-NI CPU features:
Host CPU is incompatible with the virtual machine's requirements at CPUID level 0x1 register 'ecx'
Working my way through the KB articles on the VMware website, I realised that as we were not licensed for EVC, KB 1993 applied. It took me a while to get my head around what was going on, but essentially you need to tell the VM when it starts up to ignore the flags for those two CPU features so that it doesn't use them, and so it can happily be moved between host servers that do and don't support those CPU instructions.

So down to the nitty gritty: how do you actually disable these features? Connect the vSphere client to either your vCenter server or the ESX server itself. Right-click the VM you wish to make more mobile and choose "Edit Settings", then on the "Options" tab select "CPUID Mask" and click the "Advanced..." button. Scroll down to the "Level 1" section; for "ecx" the mask you need is (presented here for your copy and paste delight):

---- --0- ---- ---- ---- ---- ---- --0-
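For what it's worth, my reading of the mask (counting ECX bits from 0 on the right to 31 on the left) is that the two zeros hide exactly the two features named in the error, bit 1 (PCLMULQDQ) and bit 25 (AES-NI):

---- --0- ---- ---- ---- ---- ---- --0-
       |                            |
       bit 25 (AES-NI)              bit 1 (PCLMULQDQ)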

This is explained in the KB 1993 article, but there they refer to the "Level" as "a", which is not mentioned in the ESXi 5.0/vSphere 5 configuration screens.
After I changed this setting all the VMs were happy to vMotion back and forth between the servers.

Tuesday, 15 November 2011

Exchange 2007 certificate expired

Once a year Outlook starts to moan about the certificate on the Exchange server. This is because the certificate on the Exchange server was set to expire one year after creation.

To check whether this is actually your problem, run the following command at an Exchange PowerShell prompt.

Get-ExchangeCertificate | list

This will show you a list of the certificates used by Exchange; the one we are interested in has IIS mentioned in its list of services. The NotAfter field will tell you when the certificate expires. For me it showed a time about half an hour ago, so we need a new certificate. To create a new one run the following as an Exchange Server Administrator:

New-ExchangeCertificate

This command will ask if you want to overwrite the existing default SMTP certificate; answer yes to this. It should now display (along with other info) a thumbprint for the newly created certificate. Rather than re-type this, copy it and paste it into the next command. We need to enable the newly created certificate for the IIS service, which we do with:

Enable-ExchangeCertificate -Thumbprint <thumbprint from previous command> -Service IIS

You can remove old certificates from the Exchange certificate store with the following command:

Remove-ExchangeCertificate -Thumbprint <thumbprint of old certificate>

If you need a list of certificate details, including their thumbprints, re-run:

Get-ExchangeCertificate | list

After running the above I noticed that the certificate is now set to be valid for 5 years instead of 1. On further investigation it appears that Exchange 2007 SP2 changed the default for self-signed certificates from 1 year to 5 years, woohoo!

Tuesday, 8 November 2011

Setting an “out of office” message for a user on Exchange 2007


To set an out of office message for someone else, you will first need to give yourself permission to the mailbox of the user whose out of office message you want to enable or modify, using the PowerShell extensions for Exchange. At the PowerShell prompt issue the following command:


Add-MailboxPermission <user's mailbox name> -AccessRights FullAccess -User <administrator's account name>


Now you have the correct permissions to log into Outlook Web Access (OWA) as the above administrator account, switch to the above user's mailbox and set their out of office message.


Using Internet Explorer, go to https://<FQDN for your exchange server>/owa and log in as the administrator account you specified in the above PowerShell command.
Once you have logged in (this may take a while if OWA hasn't been used recently, due to the way Exchange dominates a server), click the admin username you logged in with in the top right and type in the name of the user you want to set the out of office reply for. Once logged in as them, select Options (top right), then select the out of office autoreply from the list on the left and complete as necessary.

Wednesday, 19 October 2011

Oddness with Windows XP Pro on a Domain functional level 2008 or above

I work as a systems administrator for a small software house that produces products for a very specific sector.

We had a problem recently with our software not working on a Windows XP Professional (32-bit) machine that was in a domain with a domain functional level of 2008 R2. Our client software was producing an error about not being able to create the necessary objects when it tried to talk to our server-side software running on their DC. After several days of pulling my hair out, it boiled down to a bug in Windows XP's kerberos.dll where it does not speak AES to the DC.

To see if you are getting the same problem, download and install Wireshark on the workstation or the server and capture packets between the machines while reproducing the problem. Look through the capture to see if there is a "KRB5 TGS-REQ" packet sent from the XP machine to the server, with a "KRB Error: KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN" response. Drill down into the TGS-REQ packet (TGS-REQ -> KDC_REQ_BODY -> Encryption types); if there is no AES encryption type listed then you probably have the same problem I did.
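To narrow the capture down, a Wireshark display filter along these lines should leave just the Kerberos traffic (either form works):

kerberos
tcp.port == 88 || udp.port == 88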

AES encryption for Kerberos comes into effect when the domain functional level is Windows 2008 or higher.

We solved the problem by applying the following hotfix (requires SP3): http://support.microsoft.com/kb/969442

I realise that this information is a bit specific, as the problem only showed up when our software tried to invoke DCOM objects, and normal file sharing seemed to work without issue. Hopefully this may prevent someone else going through the pain I endured finding this fix.

Thursday, 13 October 2011

Squid authenticating against Active Directory on Debian Squeeze

Install Debian Squeeze on your target (virtual) machine, choosing only "Standard system utilities" during package selection.


I would normally log on and install ssh and vim to make life a little easier:
# apt-get install ssh vim
Samba
We will need to install Samba, winbind and the Kerberos client to get authentication working with Active Directory:
# apt-get install samba winbind krb5-user
If the Samba server configuration asks you for a "Workgroup/Domain name" you can use the short name that you specified for the domain when you created it, but as we will replace this file shortly it doesn't actually matter what you put.

Now run re-configuration for krb5-config: 
# dpkg-reconfigure krb5-config
Default Kerberos version 5 realm: <FQDN for your AD domain in caps>
Add the kerberos server names for your domain: yes
Kerberos servers for your realm: <FQDN of your primary domain controller>
Administrative server for your kerberos realm: <FQDN of your primary domain controller>


We can test that Kerberos is working by running the kinit command; the format of the command is:
kinit <username>@<FQDN for your AD domain in caps> 
So for example, if our FQDN for the domain is test.local and we are using the administrator account we would type:
kinit administrator@TEST.LOCAL
If all is well it will ask for the password for the above account; if it accepts the password it will simply return you to the prompt, and if there is something amiss it will report an error.
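For an extra check that a ticket really was issued, klist (also from the krb5-user package) will list the tickets currently held:

# klist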


Move the /etc/samba/smb.conf file somewhere safe:
# mv /etc/samba/smb.conf /etc/samba/smb.old.conf
Create a new /etc/samba/smb.conf containing the following (note that it starts with the [global] section header):
[global]
netbios name = <this machine name>
workgroup = <shortname for your domain>
password server = <FQDN for your domain controller>
realm = <FQDN for your domain>
security = ads
winbind uid = 10000-20000
winbind gid = 10000-20000
winbind separator = +
winbind use default domain = yes
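Before restarting anything it is worth letting Samba sanity-check the new file; testparm comes with the samba package and simply parses and prints the configuration:

# testparm -s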

Restart the samba services:
/etc/init.d/samba restart ; /etc/init.d/winbind restart
Join the active directory with:
net ads join -U administrator
If you receive a "DNS update failed" error you should manually add this server to the DNS server for the domain.
Restart the samba services:
/etc/init.d/samba restart ;  /etc/init.d/winbind restart
Check that the following commands return successfully:
# wbinfo -t
Success of the above command confirms connection to the domain controller.

# wbinfo -u
Lists users (local and domain).
# wbinfo -g
Lists domain groups.
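Optionally you can also test an actual authentication through winbind; substitute your own domain short name, username and password (bear in mind the password will end up in your shell history):

# wbinfo -a <shortname for your domain>+<username>%<password>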

Squid
Install the squid packages:
# apt-get install squid
Copy the squid.conf somewhere safe:
# cp /etc/squid/squid.conf /etc/squid/squid.conf.old
Edit /etc/squid/squid.conf and uncomment (remove the hash from) the following line:
#http_access allow localnet
Assuming you are using an RFC 1918 network range on your network, this will allow you to use the proxy. Save the file and restart squid with:
/etc/init.d/squid restart
Log in to a domain-joined Windows box as a domain user in the inetaccess group (or whichever group you choose below to add to the auth_param directives). Set the browser to use squid as its proxy server (squid runs on port 3128) and see if you can get to the internet. If not, you will need to concentrate on getting squid working as a simple proxy before you try to add authentication into the mix.
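If you would rather do a quick sanity check from the squid box itself before involving a browser, something like this should work (the URL is just an example; wget honours the http_proxy variable):

http_proxy=http://127.0.0.1:3128 wget -O /dev/null http://www.debian.org/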

You will need to choose or create a security group within your Active Directory domain that we will use to decide which users can authenticate with squid. I have chosen a group named inetaccess (a single word for the name of the group so we don't have to deal with spaces) and added all the users I want to give internet access to as members of this group.

Edit squid.conf with the following changes:

Add the following auth_param section:
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of=<short name of the domain>+inetaccess
auth_param ntlm children 5
If you want basic (very insecure, but handy for fallback) authentication then you will also need the following auth_param section.
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic --require-membership-of=<short name of the domain>+inetaccess
auth_param basic children 3
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 5 hours
To stop the leaking of your client IP addresses out of the proxy, set forwarded_for to off
forwarded_for off
Add the following after the existing acl entries:

acl users proxy_auth REQUIRED
Comment out (add a hash to the front of) the line we uncommented before:


http_access allow localnet


Add the following after the existing http_access lines, but BEFORE http_access deny all:
http_access allow users
Restart squid 
/etc/init.d/squid restart
Using the same domain-joined Windows box, logged on as the domain user you confirmed things were working with earlier, see if you still have access.


Troubleshooting
If you are not sure why squid is not letting you through, try changing the debug options in squid.conf to:
debug_options ALL,1 33,8
Try the proxy again and then look through the log file /var/log/squid/cache.log for clues as to what is going on.
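It can also help to take squid out of the picture and test the helper directly. Run the basic helper by hand as below (substituting your own domain short name), then type a username and a password separated by a space and press enter; it should print OK for a valid user in the group and ERR otherwise. Press Ctrl-C to quit:

/usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic --require-membership-of=<short name of the domain>+inetaccess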


Gotchas:

Windows machines running on the Vista or Windows 7 code base (so this includes Windows Server 2008 and 2008 R2) by default have a local policy setting that prevents you from authenticating with squid using the older version of NTLM. Go into the Local Security Policy management console, then "Local Policies" -> "Security Options" -> "Network security: LAN Manager authentication level", set it to "Send LM & NTLM - use NTLMv2 session security if negotiated" and reboot.
You can set this by group policy if you have a lot of machines to change; also, if you find that the setting is not taking effect, then maybe group policy is changing it back.



Wednesday, 5 October 2011

Debian Squeeze samba domain member fileserver

I struggled to find information on how to set up a samba domain member server on Debian Squeeze. This is how I got it working.

NTP
Firstly we install and configure an NTP client. This is needed to keep Kerberos happy: if the time is out by more than 5 minutes, Kerberos refuses to work.
apt-get install ntp
If your machine can access internet time servers then this is all you need to do; ntp will happily start connecting to the Debian time server pool and sync your time.


I have my own time server set up, so in /etc/ntp.conf I commented out the server directives for the Debian time server pools and added my own server with the following:
server time.mycompany.com
Restart the ntp daemon with:
/etc/init.d/ntp restart
You can check what ntp gets up to with:
tail -f /var/log/syslog
You will hopefully see something like the following:

Oct  1 10:34:12 myserver ntpd[3757]: ntpd 4.2.4p4@1.1520-o Sun Nov 22 16:14:34 UTC 2009 (1)
Oct  1 10:34:12 myserver ntpd[3758]: precision = 1.000 usec
Oct  1 10:34:12 myserver ntpd[3758]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Oct  1 10:34:12 myserver ntpd[3758]: Listening on interface #1 wildcard, ::#123 Disabled
Oct  1 10:34:12 myserver ntpd[3758]: Listening on interface #2 lo, ::1#123 Enabled
Oct  1 10:34:12 myserver ntpd[3758]: Listening on interface #3 eth0, fe80::250:56ff:fe84:1#123 Enabled
Oct  1 10:34:12 myserver ntpd[3758]: Listening on interface #4 lo, 127.0.0.1#123 Enabled
Oct  1 10:34:12 myserver ntpd[3758]: Listening on interface #5 eth0, 192.168.0.100#123 Enabled
Oct  1 10:34:12 myserver ntpd[3758]: kernel time sync status 0040
Oct  1 10:34:12 myserver ntpd[3758]: frequency initialized 0.000 PPM from /var/lib/ntp/ntp.drift
Oct  1 10:43:28 myserver ntpd[3758]: synchronized to 192.168.0.2, stratum 3
Oct  1 10:43:28 myserver ntpd[3758]: time reset +298.707500 s
Oct  1 10:43:28 myserver ntpd[3758]: kernel time sync status change 0001
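Rather than watching syslog, you can also ask ntpd directly how it is getting on; after a few minutes an asterisk should appear next to the server it has synchronised with:

ntpq -p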


Kerberos
Install the kerberos client
apt-get install krb5-user
Default Kerberos version 5 realm: <FQDN for your domain>
Kerberos servers for your realm: <FQDN for your DC>
Administrative server for your Kerberos realm: <FQDN for your DC>


Test that kerberos is working correctly by running the following command (caps are important):
kinit administrator@MYCOMPANY.COM
This will ask you for the password for the above account, which needs to exist in your Active Directory. If you input the password correctly you should be returned to the command prompt; otherwise you will see something like:
kinit(v5): Preauthentication failed while getting initial credentials
In which case you will need to check the contents of your krb5.conf file.
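For reference, the parts of /etc/krb5.conf that matter here look roughly like the sketch below; MYCOMPANY.COM and dc1.mycompany.com are just placeholders for your own realm and domain controller:

[libdefaults]
        default_realm = MYCOMPANY.COM

[realms]
        MYCOMPANY.COM = {
                kdc = dc1.mycompany.com
                admin_server = dc1.mycompany.com
        }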

Samba
Install samba and winbind with:
apt-get install samba winbind
Don't worry about what we put for the Samba config, as we will replace the file anyway.


Move the samba config file out of the way with:
mv /etc/samba/smb.conf /etc/samba/smb.conf.orig 
Create /etc/samba/smb.conf with the following contents, changing the workgroup and realm to match what your AD is set to.

[global]
        netbios name = sambafileserver
        workgroup = MYCOMPANY
        realm = MYCOMPANY.COM
        server string = Samba Domain Member
        smb ports = 445
        security = ADS
        encrypt passwords = yes
        winbind enum users = yes
        winbind enum groups = yes
        winbind use default domain = yes
        winbind nested groups = yes
        winbind separator = +
        idmap uid = 10000-20000
        idmap gid = 10000-20000

        client use spnego = yes
        client ntlmv2 auth = yes

[store]
        comment = file store
        path = /store
        read only = no
        valid users = MYCOMPANY+administrator

There are a couple of things to note about the smb.conf file: workgroup is the short name for your domain (like the domain part of a username when used like so: DOMAIN\username), and realm is the FQDN of the domain (the part after the @ when you are using the username format like so: username@DOMAIN.LOCAL).


Create the /store path so that samba can access it to share it out:
mkdir /store 
chmod 777 /store
Edit /etc/nsswitch.conf and ensure that the passwd line looks like this:
passwd:         compat winbind
Restart the samba and winbind services:
/etc/init.d/samba restart ; /etc/init.d/winbind restart
We now need to join the computer to the active directory with:
net ads join -U administrator
As long as the join reports as successful you should be able to ignore any other failures.

Test that winbind can communicate with the domain:
wbinfo -t
If all is well the above command should return:
checking the trust secret for domain NA via RPC calls succeeded
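You can also check from the Linux side that domain users resolve through nsswitch; with "winbind use default domain = yes" the bare username should work (substitute a real user from your own domain):

getent passwd administrator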
Now jump onto a domain-connected Windows PC and see if you can create files in the share as the user mentioned in the "valid users" directive of the smb.conf.

Wednesday, 10 August 2011

Installing VMware tools on a Debian machine

Update: You may not need to do this, check this post.

Another one for the list of things I forget how to do regularly.


Make sure our packages are up to date.

# apt-get update

# apt-get upgrade

Install the packages needed to install the vmware tools:

# apt-get install build-essential psmisc linux-headers-$(uname -r)

In the VMware console select "Install VMware Tools..." from the VM menu.
Then type the following:
# cd

# mount /media/cdrom

# tar zxvf /media/cdrom/VMware*.tar.gz

# cd vmware-tools-distrib

# ./vmware-install.pl --default

That should be it; you should now have working VMware tools on your Debian installation.
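If you want a quick check that the kernel modules built by the installer are actually loaded, something like this works; the exact module names (vmxnet, vmci, vmmemctl and so on) vary with the tools and kernel version:

# lsmod | grep -i vm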

Optionally you can delete the installation files with:
# cd ..

# rm -rf vmware-tools-distrib


Update: You should consider using more efficient virtual hardware and drivers; check out my post about using paravirtualized drivers.

Friday, 8 July 2011

DNS forwarders

I shouldn't have to do this too often, and to be honest I can't remember the reason I last set up forwarders with BIND, but I do remember that I did, and here I am again looking up how to do it.

For a remote office I want to set up a caching only name server that forwards lookups onto my ISP's DNS servers, but I also want it to forward requests for the active directory domain onto my central site DNS servers.

Why is this a good idea? It means the remote office machines use the ISP's DNS via the remote office's internet connection, rather than every DNS query going down the VPN to the central office's DNS servers, except when we need them to. It also means that if for any reason the VPN between the offices goes down, the remote office can continue accessing the internet.

Install BIND on Debian with:

# apt-get install bind9

Get BIND to use the ISP's DNS servers

Edit /etc/bind/named.conf.options and add the following line within the options section:

forwarders {1.1.1.1; 2.2.2.2; };

You will need to replace the fictitious IP addresses above with the ones that are provided by your ISP.

Get BIND to forward queries for your Active Directory domain to your internal DNS servers.

Edit /etc/bind/named.conf.local and add the following lines:

zone "my.domain.com" { type forward; forward only; forwarders {10.0.0.1;10.0.2.10;}; };

zone "0.0.10.in-addr.arpa" { type forward; forward only; forwarders {10.0.0.1;10.0.2.10;}; };

Change the IP addresses 10.0.0.1 and 10.0.2.10 to the IP addresses of the DNS servers that serve your Active Directory domain.
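You can check that the new configuration parses and get BIND to pick it up with the following; dig (from the dnsutils package) is handy for testing a lookup against the forwarder afterwards:

# named-checkconf
# /etc/init.d/bind9 reload
# dig @localhost my.domain.com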

And don't forget to reboot your Linux box to test that everything works as expected :D

Always reboot a Linux box!

This is something I get caught out with from time to time.

You are not forced to reboot a Linux box as often as some other PC operating systems, for me this is good and bad.

It has been known for me to throw a machine together to solve a 'temporary' problem and not test that it returns to that same state after a reboot. It is all too easy to make the desired changes (iptables entries, NIC settings and routes) and not actually save them in init scripts. When your handiwork silently turns from a 'temporary' solution into a permanent one, and someone reboots the box... well, you get the picture.

So if you can, always do a confidence reboot!

Thursday, 26 May 2011

Generate random passwords, passphrases or keys

One of the things I never remember how to do without looking it up is creating a (pseudo) random string of hex characters. Recently I needed to create a new hex WPA pre-shared key for a wireless network I was setting up.
Running the following command on a Linux box did the trick:

dd if=/dev/urandom bs=1 count=32 2>/dev/null | xxd -ps

I know there is also a /dev/random device, so I looked up what the difference is. It seems /dev/random takes its data from the kernel entropy pool, and if there is not enough entropy to serve you it will block, waiting for more to become available. So if you replace /dev/urandom with /dev/random you may have to wait longer, but your resulting key will arguably be more random:

dd if=/dev/random bs=1 count=32 2>/dev/null | xxd -ps

The xxd command simply converts the raw output of the random device to hexadecimal.
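As an aside, if openssl is installed, the following should produce an equivalent 32-byte hex string in one go:

openssl rand -hex 32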

Sunday, 20 February 2011

Why I (mostly) use the Debian Linux distribution when a Linux box is needed

I have found reason to question why I use Debian most of the time when the solution I am working on dictates the use of Linux. The answer is usually a question of footprint, or the size of the installation.

When I first installed Linux for something justifiable I hi-jacked a desktop PC that was used by someone who had recently left the company.

The machine started off as an internet gateway/firewall using a 33.6Kbps modem. It began evolving into a lumbering beast as I added an SMTP and POP3 mail server, an internal-only DNS server, swiftly followed by a DHCP server. Then I added an internet proxy (Squid). Samba went on when I needed somewhere "out of the way" to put software network installation files. I added cron jobs to generate emails and net send messages to remind staff to perform tasks, and Apache to serve a company-wide home page that contained company-relevant links (it would be too much of a stretch to call this an intranet). I started writing bash scripts to report internet usage, and used PHP on the "intranet" page to provide daily updates on the progress of the company's software development and testing (the actual mechanics of this have been long forgotten, but I definitely used PHP).

In essence the thing became unmanageable. I realised this was the case the first time I came to upgrade it. The reasons were twofold. Firstly, there were simply too many services on the one machine for downtime not to become an issue. Secondly, I hadn't documented any of what I had done.

This was in the days before virtualisation, and being a small company it was difficult enough to get my hands on the machine in the first place; it was very unlikely that management would fork out for further machines when I had previously been able to make do with one.
So for the next few years I stumbled along, taking the pain when it became appropriate or necessary to upgrade the OS on this box, but in the main the machine just ran and ran.

I saw virtualisation come along in the form of VMware Workstation for NT (I used the beta, then bought a licence when it was released). Then came GSX Server, then ESX Server. What a great invention: a sandbox OS running on top of your desktop OS, and if it crashes you can just restart the application that was running the sandbox. What a great opportunity for software testers.

Then I noticed an open source project called Xen. After some research I was able to get it running multiple virtual machines on a single box, with some of the functionality of ESX. All of these VMs ran Debian and I was amazed at the small footprints of these installations.
This allowed me to dedicate VMs to particular tasks and not affect the other VMs when one of them needed upgrading. Now this may sound like old hat, but at the time this flexibility was amazing, not to mention the fact that the users got so used to systems with high uptimes and very little noticeable scheduled downtime. I was also able to use good old bash scripts to create and snapshot VMs for upgrade testing.

I had kept an eye on XenSource, which was the commercial side of the open source Xen hypervisor, but it seemed to cost too much to justify for our company's usage.

I dabbled with VMware Server 1 & 2 when they came out, as they were effectively a free version of GSX. I wrote some custom bash scripts to create LVM partitions for VMs, which worked out well because we had to add local hard disks to the box as requirements grew.

Citrix bought XenSource and shortly after released a free version of XenServer (instead of one limited to small numbers of VMs).

XenServer was almost enough for me, but there was one thing missing: the ability to move a VM from one physical host to another. With the next version of XenServer, that is what we got.

So I asked management for a SAN and some new servers; this was refused due to funding needs elsewhere. So I sat and thought about what I could do on a shoestring, and came up with re-purposing some old desktop PCs that the development team had just had replaced. Buying in a cheap iSCSI disk array, I was able to create a Xen cluster of 5 machines that more than covered all our ageing ESX, Xen and VMware Server VMs. While moving these VMs to this new environment I was able to appreciate the effort involved in transferring large VM images around. Some of these VMs were Windows boxes and necessitated large images; others were CentOS installs bloated with large amounts of un-needed packages. The Debian VMs were small, and although I ended up rebuilding the machines instead of moving them, I settled on 2 templates for the Debian boxes, one with 2GB storage and one with 8GB storage. Most of my Debian VMs are 2GB and I have since got rid of all other flavours of Linux in preference for Debian.