We have been quiet on the blogging front for quite a while now, mainly because we've been incredibly busy learning new stuff.
To be honest, I've been lazy about writing for two reasons:
1. I don't really like blogging about stuff I don't know.
2. I haven't been bothered to find a suitable 'code snippet' plugin for Windows Live Writer, which is my blogging tool of choice.
This post, therefore, exists mainly so I can try out the Plugin Collection for Windows Live Writer (found here), which formats code snippets directly in the editor.
The output looks pretty good in the editor and I'll be interested to see how it looks once published, after which I intend to start blogging about my favourite new features in ASP.NET MVC 4 and knockout.js.
If you are interested in embedding action links within HTML5-style buttons (client side) then you can make use of the 'onclick' attribute on the button.
My first mistake was omitting the 'location.href' part of the expression; a working example can be found below:
<button type="button" onclick="location.href='@Url.Action("Create", "Log")'">Create</button>
Razor evaluates the @Url.Action expression on the server, so the browser simply receives an onclick handler containing the resolved URL (for example location.href='/Log/Create' with the default routes).
Ok, hands up, who got a shiny new SSD (Solid State Drive) for Christmas this year? Right, well I did: a wonderful-looking Corsair 120GB SATA3 affair with all of the trimmings, at least so I hope, as I've been unable to try it just yet… So what's the problem? Like many people nowadays I don't rely on a single machine to look after my data, and after some careful consideration I decided to use Windows Home Server (2011).
Aha! I thought, I'll back up my system to Home Server and then restore it onto my new SSD, right? Wrong. Because of their cost, SSDs are substantially smaller than their cheaper, slower counterparts. My PC's operating system sat on a 1TB hard disk but was using only 85GB of the available space; unfortunately that doesn't make your life any simpler, because the restore still expects a target disk as large as the original volume, so don't waste time at this point clearing your 1TB drive down to the capacity of your target device…
Next I thought: I know, I'll use the Disk Management tool in Windows 7 to shrink the volume. Wrong again: it will only shrink the volume down to wherever the nearest unmovable file resides, which in my case left a disk of around 800GB, still not sufficiently small. I have read a few articles suggesting that this approach can be made to work, but at this point I am downloading the Windows 7 installer image once again to completely rebuild my machine.
I have to admit, this is very disappointing, so if you come up with a better solution which doesn’t involve astrophysics then I’d love to hear it.
The first time I installed Hyper-V was the first time I'd really got into virtualisation of hardware or indeed software. This was actually quite ground-breaking for me, as until this point I had feared change in all its forms – not a great sign for a software company director. That said, I took it on the chin when my co-director put me in touch with the real world and pointed out that most hardware, even in large organisations, sits around doing not a great deal. Indeed, to arrive where we currently stand would otherwise have required ten pieces of physical hardware, eating electricity and money and not being at all easy to manage. True, in our case we have put all of our eggs in one basket at this point, but hopefully we'll mature to backups and safety in the next few weeks.
The great thing about virtualised hardware is that (in theory) it is easy to transfer these virtual machines into the physical space (or vice versa) if you feel the need. I wrote 'in theory' simply because I'm yet to do it, although in my early days I spectacularly failed to realise that you can't simply copy and paste the virtual drive images from Hyper-V and expect them to work in a new hosting environment. If you are ever in a situation where this is a requirement, remember that you must export an image before you can re-import it into a new Hyper-V management system. I made this mistake myself and ended up rebuilding the whole network after blitzing the previous one.
Building a virtual machine (and indeed software) is definitely an art and not a science; if anyone tells you otherwise they have a bigger beard than you. And probably sandals. With socks. But if you have access to such a man (or frightening woman) then use them and don't do it yourself. It is a massive task, mainly due to the learning curve and the time it takes to download updates. When I have a chance and understand it myself, I'll write about WSUS (Windows Server Update Services, essentially a Windows Update server on your own network) to save your download bandwidth. I write this paragraph as a warning to those unsuspecting budding enthusiasts who expect to get this for free. At the time of writing, my understanding is that WSUS provides a single point of download for Windows Updates, so you download them once and store them locally on your intranet. I don't know how simple a task this is, as my initial read on the topic seemed unfathomably complex. More on this when I've done it for myself; my point is simply: read about it now, and then disregard it in a slightly irritated and short-sighted fashion, as I did.
Make no mistake: when you take on Hyper-V (or any) virtualisation, you'll need good hardware. A proper domain for a business (ignore the test-lab people) requires a LOT of memory at the very least. We are currently running with only 20 GB of RAM and just over 1.5 TB of disk storage. I'd say this is the very minimum for a true domain network, but first you have to decide what your plans are. Before you get going, check the minimum requirements for each of the primary server roles you are planning to establish. If you have no idea what I've just said then you are not alone.
I'm only vaguely experienced in Windows Server 2008 R2, as we (Cogenity) have a tendency to use only cutting-edge or newer technologies. Arguably something that has the date 2008 on it (R2 or otherwise) should not really qualify as 'new' given that I'm writing this post in late 2011, but this appears to be the Microsoft way. Regardless, it is only the small, the inexperienced or the brand new that get to mess with the latest technologies, and I'll admit now that I'm fearful of the restrictions that a successful business would put upon us. Cogenity have always maintained that we'll keep our bleeding-edge mantra, and indeed we should, as it has always proved enjoyable. For now, we'll stick with it and try to stay focused.
Roles & Features are not, I believe, new to the 2008 or even 2003 versions of Windows Server, but you will want to be sure about your hopes for your organisation before you choose which to install. If you are a realistic modern company with no legacy, then embrace the new and install the very latest options that you can get your hands on, but be prepared for the pain of learning with very few people who can help you in your plight. Those that have embraced new technologies know this pain; let's get back to the point.
Server Roles & Features are simply installable features of the Windows Server operating systems. Even at its simplest, a file server requires the 'File Server' role to enable the networking features, and there are a couple of dozen other roles the server can also provide; the most notable are the DNS, DHCP, Active Directory and Web Server roles, which certainly dominate the servers on our network. If you are interested in rolling out something like UAG/DirectAccess (the two are slightly different, but for all intents and purposes the same from the standpoint of this blog) then you'll also need to install Certificate Services on one of your domain machines.
As an aside on Certificate Services: be aware that if you decide to install this on your Domain Controller, it is a once-only, no-going-back event, as it locks down the Domain Controller's name for security purposes. I'd recommend creating all of these servers as independent virtual machines from the start, including the Domain Controller.
When I first started this infrastructure build I tried building a VM host machine using Windows 2008 R2 Standard Edition and installing the Hyper-V role on top. I realised later that this causes significant overhead, as the host machine now has to have a larger amount of memory allocated to its own operating system (I think it demanded something like 4GB) compared with a dedicated hypervisor operating system. Other (more die-hard) readers would perhaps consider using the Core-only installation of Windows Server 2008 R2, but wait, there is another option…
I accidentally downloaded and attempted to install Hyper-V Server 2008 R2, thinking that this was a server component that needed to be installed on top of Windows Server 2008. I was wrong; this is actually a form of the Core installation of Windows Server 2008 R2 with the command-line version of Hyper-V already installed. The only gotcha with this approach is that the operating system is close to a bare-metal installation and can severely limit your efficiency on that machine unless you know your command lines. However, I put it to you that your Hyper-V server should be the Hyper-V server and nothing more; in fact I think you'd find it quite difficult to add much more to it anyway, so perhaps this warning is unnecessary.
Your next issue is managing the underlying virtual machines, and you have to do this via the Hyper-V management console (if you've installed the Hyper-V role before on the full edition of Windows you'll already be familiar with this). The first problem I hit when attempting to remotely administer a Hyper-V server that wasn't a member of the domain (remember that we said it couldn't be?) was working around all of the security issues that you get hit with. Firstly I had to ensure that the virtual server machine was on the same private LAN IP address range with a matching subnet mask; this can be achieved relatively easily using the networking options in the command prompt menu that you are presented with when the Hyper-V management operating system starts up. I'd also suggest (almost demand) that you give your Hyper-V server a static IP address to help keep you sane later on.
Your next issue is that you won't be able to refer to your Hyper-V server by name by default, as there is no DNS server running that is capable of telling you where it is. At this point I became slightly disappointed, but found myself disappearing into that magical (and somewhat oddly hidden) hosts file under system32\drivers\etc and adding an entry mapping the server's IP address to its name (for example 10.10.0.5    hyperv-host, substituting your own address and machine name); unfortunately you'd have to do this on any machine that you want to remotely manage Hyper-V virtual machines from. The only real way around this problem is to have a completely independent piece of hardware as your Domain Controller and join your virtual machine host to that domain to avoid these issues. In my case I had one computer with 20GB of RAM in it and no other choice. If you're a small business looking to try this approach, be prepared for a long slog of learning, but that is part of the fun, right?
Several months ago, when I first completed our domain infrastructure using the fly-by-the-seat-of-my-pants method, I was examining the best ways to allow external access to our intranet and my colleague suggested something called DirectAccess. DirectAccess is part of Microsoft's Forefront Unified Access Gateway infrastructure and is so new that you have to use Windows Server 2008 R2 on the server side and, on the client side, Windows 7 Ultimate or Enterprise editions only. For most people and organisations this is a step too far, and completely shifting their infrastructure to support such new technologies is so far outside the range of cost effectiveness that you might as well stop reading now. For us, however, it provided an opportunity to learn some important new remote access technologies and spend a lot of time doing so – I don't think any of us realised just how tricky this could be.
As I have mentioned in previous, vaguely connected posts, when I first looked into DirectAccess I had very little (virtually no) experience of building networks and certainly no understanding of public-facing corporate networks, so the fear that came over me when I first read the requirements of DirectAccess was quite pronounced. I gave up several times before returning to the task, as almost all of the examples, videos and documents I had read were test-lab examples only and never reached the 'real world' in the way I wanted to.
It was a series of YouTube videos that really restarted the process for us, and unfortunately some of the realisations that hit us at that point meant rebuilding the Hyper-V server that we had already spent months painstakingly building. Most of the earlier pain in this process had to do with my lack of understanding of domains and how to interconnect Team Foundation Server with SQL Server Reporting Services and SharePoint, but that is a tale for another time. As I write this post I'm sitting in Spain, slightly away from the technology to have a rest from it, but I'm feeling the need to download my brain into some form of post so that a) I don't forget what I did, b) I don't forget who helped us get there, and c) we might help others spot the minor errors we picked up in the demos we watched.
Before you can even get started on this process, make sure you have the bare minimum requirements in place, and by this I mean both with your ISP and with your hardware. If you only need a corporate LAN running from your ISP connection then you'll require 2 consecutive public IP addresses (I need to stress the public part of that statement – these are routable addresses, not private ones); if, however, like me you require both a corporate LAN and your existing private home LAN, you'll require 3 IP addresses (of which at least 2 must, as before, be consecutive). To throw some additional fuel into the mix, our current intranet runs on a single machine that is virtualised to the hilt to provide all of the services one would require: the domain controller, the network location server, the UAG/DirectAccess server and the rest of our internal servers (TFS, SQL Server and SharePoint among them).
If you set aside the amount of time it takes to even create these images in the first place, and then ignore the fact that each of those images also requires a download of several hundred MB of updates from the Windows Update service, then you can cut to the chase of actually thinking about DirectAccess itself. I'm sure it would make sense to invest time in understanding the WSUS (Windows Server Update Services) role offered in the server framework, so that the actual download occurs only once to your local network and the updates are then managed internally by the servers. Reading the documents on setting up such a server suggests that it will occupy a very large amount of storage, which we do not currently have available. Yet. I therefore spent a good couple of days downloading updates for our various machines thinking that this would solve some of our other issues, but as it turned out we simply had some faulty ISO images downloaded from Microsoft.
For the purposes of DirectAccess, you only really need the Domain Controller, the Network Location Server and the UAG Forefront server to configure and test your DirectAccess setup. I'd certainly recommend not spending any time setting up the rest of your infrastructure until you have successfully tested all of the above.
If you are at all familiar with setting up networks using IPv4, you will probably make some fundamental errors immediately, because you'll presume that you have to set up things like default gateways within your corporate intranet. If you do so, you'll probably choose the DirectAccess machine as your default gateway, as that seems to make some sense. This will be your first error: you must not set your DirectAccess machine as your default gateway from the point of view of your intranet, as this causes the DirectAccess machine to apparently 'merge' its network adapters (they appear as both connected to your intranet, even when they have public and private IP addresses). Somewhere deep in the bowels of the instruction manuals for UAG I'm sure it says this, but typically you find out about it when you start Googling things like 'merged network adapters' and 'DirectAccess doesn't work.'
My next big tip when setting up any form of Windows Server is: don't underestimate the size of Windows, the updates or the platform that you are going to use. For the DirectAccess server, for example, I set up a fixed disk of 16GB and have already filled it. It's not easy to simply add new storage to your virtual image in a way the operating system can natively use; you'll find that it's easy to expand the disk size, just not the partition that Windows is installed to. I'd have thought that extending a virtual disk would be easy, but apparently not. Even having made this mistake several times now, I still make it again.
For our internal network (the corporate intranet) we settled on the address range 10.10.0.* with a subnet mask of 255.255.255.0, which gives a set of non-routable addresses behind a NAT. Thanks to the amazing service I have received from Eclipse, adding a consecutive set of public IP addresses to my internet-facing router was easy, and I have to say I feel a little guilty, as until recently I was wondering what I was paying for in terms of service. As it turns out the service from Eclipse is amazing: I made my request at about midday and had the IP address range assigned to me by about 1pm. Although I had only requested 3 IP addresses, due to the way subnetting maths works we ended up with a block of 8. I hope not to waste this valuable resource and expect to extend our intranet to include a DMZ-based web server in the future. Obviously my choice of private address range was arbitrary and you could equally have chosen something like 192.168.2.* or similar; in my case that range was already occupied by my own private intranet (a home-based one, running Home Server), which is a very different kettle of fish, and I wanted to keep the two very distinct from each other.
One of the other big things for me in this project was learning that you can use a single network card (I should say port, as some cards have more than a single port) to hold more than one IP address. This seemingly impossible concept is configured in a dialog that you see all of the time in networking, just behind a button you may never have pressed: network adapter properties, IPv4 settings, then Advanced. As far as I can tell you can add as many IP addresses as you want, but presumably there is little point in doing so unless it is dictated to you by requirements such as those of DirectAccess. Even as I write this post, with full knowledge that I have got DirectAccess working, I don't know why you need more than a single public-facing IP address – I'm sure someone can illuminate us on this matter.
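If you want to convince yourself that a single adapter really is holding several addresses, a few lines of C# will list them. This is just a minimal sketch using the standard .NET networking classes; nothing here is specific to DirectAccess:

    using System;
    using System.Net.NetworkInformation;

    class ListAdapterAddresses
    {
        static void Main()
        {
            // Print every unicast address bound to each network adapter.
            // An adapter that has had extra addresses added via the Advanced
            // IPv4 settings dialog will show more than one entry here.
            foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
            {
                Console.WriteLine(nic.Name);
                foreach (UnicastIPAddressInformation info in nic.GetIPProperties().UnicastAddresses)
                {
                    Console.WriteLine("    " + info.Address);
                }
            }
        }
    }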
I read in a post that the IP addresses need to be consecutive alphanumerically (i.e. when sorted as text) rather than just numerically, but I haven't trialled this particular caveat for myself. I gave us an address range of *.*.*.21 and 22, where the *'s are our public IP address numbers, which I avoid publishing here for security reasons. If you've made it this far in my ramblings, then I respect you, as chances are that you are like me and read the first couple of paragraphs thinking that you'd already got the gist of what you need to do…
For the purposes of this blog, the network location server relates only to its role within DirectAccess, and indeed this is the only time I've experienced the need to create one. My understanding of this server's purpose is that it is nothing more than a highly available resource that connecting DirectAccess clients attempt to reach so that they can ascertain which side of the network they are on – i.e. connecting from the corporate LAN or externally via the internet. When connecting via the internet, the client machine should not be able to access the website/resource, therefore establishing that it needs to use the IPv6 DirectAccess magic tunnelling rather than standard IPv4 addressing.
I read the 'highly available' resource part and put it to one side. As far as I'm concerned everything on our LAN is highly available, and I'm not sure how to ensure otherwise.
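To make the concept a little more concrete, the check amounts to something like the sketch below. This is only an illustration of the idea rather than what the DirectAccess client actually runs, and the NLS address shown is a hypothetical one:

    using System;
    using System.Net;

    class NetworkLocationCheck
    {
        static void Main()
        {
            // If the network location server answers, we are on the corporate LAN;
            // if it cannot be reached, the client falls back to DirectAccess tunnelling.
            // "https://nls.int.example.com/" is a made-up address for illustration.
            try
            {
                var request = (HttpWebRequest)WebRequest.Create("https://nls.int.example.com/");
                using (request.GetResponse())
                {
                    Console.WriteLine("NLS reachable - inside the corporate network.");
                }
            }
            catch (WebException)
            {
                Console.WriteLine("NLS unreachable - outside the network, use the IPv6 tunnel.");
            }
        }
    }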
In terms of setup, it is important before you build this server that you already have a domain controller set up, that you have added the IIS/Web Server role to your NLS server, and that you have joined it to your domain. Your domain controller must also be running Certificate Services (or you have a dedicated machine that does) configured as per the DirectAccess requirements.
The first time I tried to build my domain, I actually tried to create the domain controller using the Core-only installation of Windows Server 2008 R2. If you've ever tried this and you're not sure what you are doing, then join the club. After reading pages and pages of command-line prompts I gave up and rebuilt the machine using the full installation of Server 2008 R2. Another of my mistakes was to make the physical host both the virtual machine manager and the domain controller; I consider this a mistake because the domain controller then becomes tightly coupled to the physical hardware of the machine rather than being something that can be assigned virtual hardware at a later date. The ability to change how your hardware links to your underlying virtual machines is where you save most of your time later on when trying to resolve connectivity issues with things like UAG/DirectAccess.
The astute reader will have noticed in a previous post that I suggested creating the Domain Controller as a virtual machine (VM); this can leave you in a bit of a chicken-and-egg scenario. How do you host the Domain Controller on a virtual machine when surely you want your hosting server to have access to your domain? Well, obviously your virtual machine host cannot be a member of the domain (at least I'd strongly recommend that it isn't), as it would never be able to authenticate with the domain controller at start-up. The next mistake (I think) was with the naming of our domain and its link to DNS: I wanted to be able to call our domain [CompanyName].com so that, when prompted to log on, the domain name for a user would simply be the company name. At first this all appeared to work fine, but after a while I realised that we couldn't access internet resources of the same name, e.g. webmail.[CompanyName].com, as the DNS server assumed these resources to be local on the subnet. I'm sure there are ways of using Forward Lookup Zones in DNS to resolve this, but for the most part all the examples I saw used the term corp or int as a form of prefix or suffix. In the end I opted for int.[CompanyName].com, which I hoped would clear up the confusion – and it kind of did, but left us with a NetBIOS (pre-Windows 2000) domain name of INT rather than [CompanyName].
Personally I don't think that INT or CORP are particularly good domain names, but you have to remember that under the covers the full domain name is int.[CompanyName].com, which is acceptable in my eyes and certainly distinct enough for DNS to let us reach the internet resources we couldn't access before.
DNS is something that until recently I regarded as a bit of magic in the realms of networking and, to be fair, it is still quite magical. At the end of the day it is an extremely simple concept: it acts as an address book for the computers and resources within your network. DNS is a vital feature of a network of any size, and it allows you to abstract your IP addresses into more memorable names, which becomes especially important when you start dealing with things like IPv6 addresses, which frankly look more like GUIDs (32-digit alphanumeric identifiers) than anything else. Consider DNS your global address book for the network; usually, but not always, this service is installed on the domain controller. In terms of failover and clustering I have no expertise, so I'm sure someone out there can advise whether this is good practice or not. I certainly understand that it is good practice to have a backup domain controller, but for us that simply isn't practical as we only have one piece of physical hardware running everything...
If and when you start working with technologies such as DirectAccess, the ability of the DNS server to keep tabs on both the IPv4 and IPv6 addresses for the same machine becomes vital. One of the demos I watched recently on YouTube showed an IPv4 address being replaced by an IPv6 address for a machine connected via DirectAccess, and this is a sign of things working. For whatever reason I mentally connect DHCP with DNS, and I believe the two are somewhat entangled as services go.
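If you want to see for yourself which addresses DNS is handing out for a machine, a couple of lines of C# will show both the IPv4 and IPv6 records. The host name below is hypothetical; substitute one of your own:

    using System;
    using System.Net;

    class DnsLookup
    {
        static void Main()
        {
            // Ask DNS for every address registered against a machine name.
            // A DirectAccess-connected client should show an IPv6 entry here.
            string host = "someclient.int.example.com"; // hypothetical name
            foreach (IPAddress address in Dns.GetHostAddresses(host))
            {
                Console.WriteLine(address.AddressFamily + ": " + address);
            }
        }
    }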
Group Policy, Certificates and Active Directory all sound like very fancy terms, but I find they make the services they describe sound complex and slightly off-putting. The way I like to think of Group Policy is simply as a series of rules, settings and/or requirements that you enforce on various components of your network. For the purposes of this particular blog entry we want to force each of our computers to enrol with our certificate service, ensuring that they are authenticated against the certificate service on our central certificate server (in my case this is the Domain Controller, though I'm still not entirely sure whether this is good practice or not).
Via Group Policy you can enforce password requirements, disk quotas, maximum numbers of concurrent users, permitted access hours, whether you can connect over VPN and a huge number of other options within your network, but at its simplest it's just a way of reducing repetition in your setup. You don't (for example) want to have to manually set up each computer in your domain with its own authentication certificate; you'd rather they were given this knowledge when they joined your domain – this can save you a lot of time, particularly when you are dealing with hundreds, perhaps thousands, of machines.
I have recently been playing with using Coded UI Tests in order to automate a set of UAT tests for Stockroom. After installing Feature Pack 2 and adding the conditionally compiled assembly everything seemed fine; I was able to start Stockroom and click around with the clicks being recorded. However, trying to click on an item in a styled and templated ComboBox resulted in the error:
"Last Action was not recorded because the control with the Name '<model name>' and ControlType 'ListItem' does not have any good identification property."
Confused, I returned to MSDN and found this article. I therefore set about adding bound AutomationProperties to the data template used by the ComboBox, then restarted recording my tests, only to find exactly the same error. Back to the documentation, and right below the "Using a Data Template" section was this little gem:
“For both of these examples, you must then override the ToString() method of ItemSource, as shown using the following code. This code makes sure that the AutomationProperties.Name value is set and is unique, because you cannot set a unique automation property for each data bound list item using binding”
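To put that suggestion into context, the workaround amounts to something like this (a hypothetical model class, shown purely to illustrate how intrusive it is):

    public class LogEntryModel // hypothetical data model
    {
        public string Name { get; set; }

        // The MSDN workaround: return a unique string from every bound item so
        // that AutomationProperties.Name gets set - test plumbing leaking into
        // the data model.
        public override string ToString()
        {
            return Name;
        }
    }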
Modify my data models in order to run a UI test? No thanks!
So, while the Xaml provided in the above documentation didn't work, it did illustrate that the automation properties needed to be set on the container for the underlying item. The obvious way to do this would be to subclass the ComboBox and override the GetContainerForItemOverride method in order to set the appropriate automation properties but, while this approach would definitely work, it would require significant refactoring of code and (depending on the implementation) a large number of ComboBox-derived controls. Instead I decided to write a Behavior that could be used within a data template and which could set the appropriate automation property values on the container at run time.
The AutomationPropertyBindingBehavior is added to the root FrameworkElement of the DataTemplate and has two DependencyProperties, one of which (Value) holds the data-bound value to apply to the item's automation properties.
When the behavior is attached to the FrameworkElement it registers a callback for when the element has been fully loaded.
Within this callback, the behavior first unregisters the callback and then calls SetAutomationValues.
SetAutomationValues endeavours to locate the ItemsControl and Container for the item. If successful, AutomationProperties are set to the data-bound value of the Value DependencyProperty.
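In outline, the behavior looks something like the following. This is a simplified sketch rather than the exact class you can download below: the real class exposes two dependency properties but only the Value property described above is shown here, and the container lookup (a simple walk up the visual tree) is my assumption about how the container is located:

    using System.Windows;
    using System.Windows.Automation;
    using System.Windows.Controls;
    using System.Windows.Interactivity;
    using System.Windows.Media;

    public class AutomationPropertyBindingBehavior : Behavior<FrameworkElement>
    {
        // The data-bound value that should end up on the item container.
        public static readonly DependencyProperty ValueProperty =
            DependencyProperty.Register("Value", typeof(string),
                typeof(AutomationPropertyBindingBehavior), new PropertyMetadata(null));

        public string Value
        {
            get { return (string)GetValue(ValueProperty); }
            set { SetValue(ValueProperty, value); }
        }

        protected override void OnAttached()
        {
            base.OnAttached();
            // Wait until the templated element has loaded and is in the visual tree.
            AssociatedObject.Loaded += OnLoaded;
        }

        private void OnLoaded(object sender, RoutedEventArgs e)
        {
            // Unregister the callback, then set the automation values once.
            AssociatedObject.Loaded -= OnLoaded;
            SetAutomationValues();
        }

        private void SetAutomationValues()
        {
            // Walk up from the template root looking for the generated item
            // container (ComboBoxItem derives from ListBoxItem) and copy the
            // bound value onto its AutomationProperties.Name.
            DependencyObject current = AssociatedObject;
            while (current != null)
            {
                var container = current as ListBoxItem;
                if (container != null)
                {
                    AutomationProperties.SetName(container, Value ?? string.Empty);
                    return;
                }
                current = VisualTreeHelper.GetParent(current);
            }
        }
    }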
Within a DataTemplate the behavior is attached to the template's root element, with its Value property bound to whatever uniquely identifies the item.
Et voilà, clicking on an item in the templated ComboBox now results in the coded UI test recording the correct message:
“Click ‘<model name>’ list item”
Victory for the forces of democratic freedom!
The AutomationPropertyBindingBehavior class can be downloaded from here:
Now that Silverlight 5 supports Style Setter Data Binding, this Behavior should no longer be necessary, as you will be able to set the binding directly on the ItemContainerStyle.
When working on a software product that involves a database, it is often difficult to share a single connection string between developers, simply because each developer tends to name his or her instance and/or database differently (and occasionally people can't use the localhost shorthand). Often these connection strings are then baked into some app.config or test configuration file somewhere, and you end up in all sorts of trouble each time you check the code into source control. The most notable occasions for us involve the Entity Framework, but there are bound to be many others.
One way of improving or at least making this problem transparent is to use the SQL Server Configuration Manager and set up an Alias to the server in question. When presented with the Configuration Manager, expand the SQL Server root node and head down to the SQL Native Client Configuration (in my case this is version 10.0). Expanding this node will show Client Protocols and Aliases.
Simply give the alias a name, choose your protocol and specify which server the alias refers to; after you have done this, all developers can refer to the database server by its alias name regardless of where the server lives from their perspective. Unfortunately aliasing doesn't change the name of the database you are connecting to, so it's still important to choose a common database name as well.
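Once the alias exists on each developer's machine, the shared connection string just refers to the alias name. A minimal sketch, where the alias and database names are hypothetical:

    using System;
    using System.Data.SqlClient;

    class AliasConnectionExample
    {
        static void Main()
        {
            // "TeamSql" is the alias configured in SQL Server Configuration Manager;
            // each developer points it at their own instance, so this string can be
            // checked into source control unchanged.
            var connectionString = "Data Source=TeamSql;Initial Catalog=OurDatabase;Integrated Security=True";
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected to " + connection.DataSource);
            }
        }
    }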
If you are a fanatic of the perfect score when it comes to Code Coverage in Visual Studio, then tread carefully as you read this post. There are, of course, ways to exclude classes and methods to assist you in reaching that desired 100% statistic. As long as you ask yourself whether you are doing this for the good of the code rather than the good of the statistics, then I have no problem admitting the secret that is readily available in MSDN. The exclusion uses a C# attribute (in this case [ExcludeFromCodeCoverage]) to decorate the offending material within your codebase and tells the code coverage engine to ignore the attributed items when collecting its statistics. The reason I looked this up was a static class containing nothing but static strings, which arguably could be tested, but I'm not sure doing so adds anything to the quality of the code.
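For what it's worth, the usage is as simple as it sounds; a hypothetical version of the static-strings class I mentioned might look like this:

    using System.Diagnostics.CodeAnalysis;

    // Decorating the class tells the coverage engine to leave it out of the
    // statistics when the analysis runs.
    [ExcludeFromCodeCoverage]
    public static class Messages // hypothetical example
    {
        public static readonly string SaveSucceeded = "Your changes have been saved.";
        public static readonly string SaveFailed = "Your changes could not be saved.";
    }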