Using PowerShell for 'one click' build and deploy

One of the benefits of a startup is the very rapid code, test, deploy-to-production cycle. For a while at CollectedIt we had a manual (but documented!) process for deploying code to production. This worked well for a while, but it started to get tedious. Plus, as anybody who has done any number of production installs knows, the more steps a human performs, the more chances for error.

It was time for an automated way to deploy our code. First we looked into using something like TFS or Jenkins. These tools, however, require installation somewhere. CollectedIt is very lean, so we prefer not to install excess services in production, spin up a new server in the cloud, or invest in a physical server just for an automated build tool (to be clear, we would spend the resources if we deemed it necessary). Next our thoughts turned to writing something homegrown.

CollectedIt runs on Windows servers in the cloud. I had been exploring PowerShell on and off for a little while, and it seemed like the perfect fit for a quick and easy homegrown deployment script.

PowerShell comes with a very powerful feature called Remoting, a technology that lets you use PowerShell to control one (or many!) remote computers. There were, however, two major obstacles that needed to be overcome.

  1. Remoting has no out of the box way to copy files from server to server
  2. Remoting over the Internet is not the most straight forward of configurations

Not having an out-of-the-box way to copy bits to a server with PowerShell is annoying. There are ways to copy bits over Remoting (such as passing a byte[] parameter containing the contents of the file when doing a remote call). However, none of them were robust enough, or performed well enough, for our tastes. We went ahead and configured an FTP server to act as a file server. Since PowerShell is built on top of .NET, we can use FTP via the .NET Framework classes. The code samples in the "using FTP with Microsoft .NET" blog entry are all in C#, but they translate to PowerShell fairly easily. Here is an example of using FTP over explicit SSL to upload a file.

$ftp = [System.Net.WebRequest]::Create($ftpuri)
$ftp.Method = [System.Net.WebRequestMethods+Ftp]::UploadFile
$ftp.Credentials = New-Object System.Net.NetworkCredential($username, $password)
$ftp.EnableSsl = $true
$ftp.ContentLength = $filebytes.Length
$s = $ftp.GetRequestStream()
$s.Write($filebytes, 0, $filebytes.Length)
$s.Close()                   # close the stream to complete the upload
$ftp.GetResponse().Close()   # wait for the server to acknowledge
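Downloading works much the same way in reverse. Here is a minimal sketch of the download side; $ftpuri, $username, $password, and $localpath are our own placeholder variables, not anything special.

```powershell
# Download a file over FTP with explicit SSL (sketch; variable names are placeholders)
$ftp = [System.Net.WebRequest]::Create($ftpuri)
$ftp.Method = [System.Net.WebRequestMethods+Ftp]::DownloadFile
$ftp.Credentials = New-Object System.Net.NetworkCredential($username, $password)
$ftp.EnableSsl = $true
$response = $ftp.GetResponse()
$s = $response.GetResponseStream()
$file = [System.IO.File]::Create($localpath)
$s.CopyTo($file)    # Stream.CopyTo is available in .NET 4 and up
$file.Close()
$response.Close()
```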

Now that we had a way to copy bits, we turned to the Remoting part. To use Remoting over the Internet, we first had to enable Remoting on the server.

PS> Enable-PSRemoting -Force 

If we were on a secured domain there would be no more steps. To get Remoting working securely over the Internet, however, we were just getting started.
  1. Create a certificate for SSL. We used the makecert command that comes with the Windows SDK (substitute your server's hostname).
    PS> makecert.exe -r -pe -n "CN=<hostname>" `
    >> -eku -ss my -sr localmachine -sky exchange `
    >> -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12
    This places the certificate inside the "Local Computer\Personal" certificate store.
  2. Get the thumbprint for the certificate. 
    PS> ls Cert:\LocalMachine\My 
    Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
    Thumbprint             Subject 
    ----------             ------- 
  3. Now we needed to create an HTTPS endpoint. We can use the winrm command to help with that. One note of warning: winrm was made for use at regular old cmd.exe. Using it with PowerShell you end up with a lot of backticks. If you run into frustration using winrm with PowerShell, just switch to cmd.exe (it's okay, I won't tell). From cmd.exe the command looks like this (substitute your hostname and the thumbprint from the previous step):
    C:\> winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="<hostname>";CertificateThumbprint="<thumbprint>"}
  4. At this point the server is able to accept Remoting connections over SSL from the Internet. Disabling the plain-HTTP Remoting listener is as easy as finding the listener and then removing it.
PS> ls WSMan:\localhost\Listener 

    WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener 

Type         Keys                          Name 
----         ----                          ---- 
Container    {Address=*, Transport=HTTP}   Listener_809701527 
Container    {Address=*, Transport=HTTPS}  Listener_1353925758 

PS> Remove-Item WSMan:\localhost\Listener\Listener_809701527

To use PowerShell Remoting over SSL there are additional parameters we needed to set when creating a remote session. The first tells PowerShell to use SSL, and the second tells it to skip the certificate authority check since our certificate is self-signed. This is as easy as

PS> $so = New-PSSessionOption -SkipCACheck # skip certificate authority check
PS> Enter-PSSession localhost -UseSSL -SessionOption $so # note the "UseSSL"

If there are any issues connecting, first check your firewall settings to allow port 5986 (the default Remoting-over-SSL port), then check out this awesome blog post on Remote PSSession Over SSL, and finally, if you still have issues, use the about_Remote_Troubleshooting help page. With the two major hurdles solved we were confident that we could use PowerShell for our purposes. Now we just needed to piece together code for

  1. Building of the project
  2. Zipping up the project
  3. Installing the project

We could have leveraged something like psake to do our dirty work; however, coming from a .NET/bash/batch background it was actually easier to build up the script ourselves (this may change in the future). To build the project we just used MSBuild.

$msbuild = "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe"
if (![System.IO.File]::Exists($msbuild)) {
    # fall back to 32 bit version if we didn't find the 64 bit version
    $msbuild = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
}
$buildcmd = "$msbuild $SolutionFile /t:Rebuild /p:Configuration=$Config"
Invoke-Expression $buildcmd

To zip up the project we chose the zip support that (finally!) comes with .NET 4.5 (note: this required us to use PowerShell v3): System.IO.Compression.ZipFile.CreateFromDirectory. In PowerShell it looks like this

Add-Type -AssemblyName System.IO.Compression.FileSystem   # load the .NET 4.5 zip support
[System.IO.Compression.ZipFile]::CreateFromDirectory($dir, $zip)

Installation of the CollectedIt code was straightforward. From the beginning we created a setup.exe (which uses the excellent Insight Schema Installer from the Insight Micro-ORM project) to install the SQL database from the command line (hence we could use it with PowerShell). The website only required that the output from the build be copied to the production location. Again, this was straightforward using PowerShell. We only had to invoke a few commands remotely on the server to get this going. It looks something like this

Invoke-Command -SessionOption $so -ComputerName $Servers -Credential $remotecreds -ArgumentList ($Config) -ScriptBlock {
    Param( $Config )

    [System.IO.Compression.ZipFile]::ExtractToDirectory("$", "c:\$Config")
    Invoke-Expression "C:\$Config\Setup\setup.exe"
    Copy-Item -Verbose -Recurse -Force "C:\$Config\Web" "D:\webroot"
}

That's all the pieces we had to put together for our 'one click' install. I should mention that our $remotecreds variable is populated with the Get-Credential cmdlet. For a more streamlined process we are investigating securely storing the creds, something along the lines of what is covered in this blog post on Importing and Exporting Credentials in PowerShell. Hope this helps you build a streamlined process for deploying your own code with PowerShell. Drop me a line with any comments or questions.
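For reference, the pieces above can be stitched together into a single top-level script. This is only a simplified sketch; the parameter names, paths, and server names are illustrative placeholders, not our actual deploy script.

```powershell
Param(
    [string]$SolutionFile = "CollectedIt.sln",
    [string]$Config = "Release",
    [string[]]$Servers = @("web1.example.com")
)

# 1. Build the solution with MSBuild
& "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" $SolutionFile /t:Rebuild /p:Configuration=$Config

# 2. Zip up the build output (.NET 4.5 / PowerShell v3)
Add-Type -AssemblyName System.IO.Compression.FileSystem
$zip = "$env:TEMP\deploy.zip"
[System.IO.Compression.ZipFile]::CreateFromDirectory(".\bin\$Config", $zip)

# 3. Upload the zip to the FTP file server (as shown earlier), then
#    install remotely over SSL
$remotecreds = Get-Credential
$so = New-PSSessionOption -SkipCACheck
Invoke-Command -UseSSL -SessionOption $so -ComputerName $Servers -Credential $remotecreds -ScriptBlock {
    # extract / setup.exe / Copy-Item steps from the post go here
}
```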


Comment your damn code

I'm just going to come right out and say it:

Comment your damn code.


Every now and then I run into an engineer--sometimes pretty high level--who thinks that you don't need to comment code. I'm going to call bullshit on this. I've been doing this a long time. Chances are, way longer than you. We are right in the middle of coding our asses off trying to launch something awesome, yet we still comment practically everything. There's no excuse not to. Every 3 to 7 lines of code you'll find some amount of editorializing. Maybe every few hundred lines you'll find a good joke too.



iPhone Cloud Based Architecture, Part 3: Getting Started with ClientServices

Now we're cooking with Crisco

Okay, the last couple of posts were really just a setup for integrating the iPhone with the services on a typical MVC3 website. In this post we're going to start getting into the hard part. Now keep in mind that there are plenty of different ways to create an architecture that syncs with the cloud. This is just how I chose to do it. It works pretty well for us, and now that the plumbing is done, I plan on rolling out a lot of new features.

Where we're at in the stack

We've got our Foo service in an area, and GetThings() will return us stuff. So now we're going to implement the client-side services. Turtles, remember. 




Now in the diagram I'm referring to specific services that I've implemented for CollectedIt!. But in the example code, we're going to stick with Foo and GetThings (damn, I need to be more creative). Anyhoo. First things first: we need to set up the project. 
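To make the Foo/GetThings example a little more concrete, a minimal client-side service in this style might look something like the C# sketch below. The class shape and caching behavior are illustrative assumptions, not the actual CollectedIt code.

```csharp
using System.Net;

// A thin client-side wrapper around the server's Foo service.
// It fetches the service response over HTTP and caches it on the device
// so repeated calls don't hit the network.
public class FooClientService
{
    private readonly string _baseUrl;
    private string _cachedThings;   // simple in-memory cache

    public FooClientService(string baseUrl)
    {
        _baseUrl = baseUrl;
    }

    public string GetThings()
    {
        if (_cachedThings == null)
        {
            using (var client = new WebClient())
            {
                _cachedThings = client.DownloadString(_baseUrl + "/Foo/GetThings");
            }
        }
        return _cachedThings;
    }

    public void InvalidateCache()
    {
        _cachedThings = null;   // force a fresh fetch on the next call
    }
}
```

In the real app the raw response would be deserialized into model objects, but the caching layer is the point here: it's what the "Phone.Services" layer in the diagram does for us.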


C# // Coding

Tale of Two Worker Items (.NET/C#)

Before .NET 4 the recommended way to run work on a background thread was to use the thread pool to queue up a work item.

ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork));

private void DoWork(object state)
{
    Console.WriteLine("Thread {0} doing work", Thread.CurrentThread.ManagedThreadId);
    Thread.Sleep((new Random()).Next(5, 5000));
    Console.WriteLine("Thread {0} DONE work", Thread.CurrentThread.ManagedThreadId);
}

There is nothing wrong with continuing to use the ThreadPool pattern to queue up worker threads. In fact, existing code using this pattern gets a performance boost just from the enhancements Microsoft made to the ThreadPool in .NET 4. However, .NET 4 also contains a whole set of features to facilitate parallel programming via the System.Threading.Tasks namespace. I would not recommend switching existing code to the new features just for the sake of using them. For new code, however, it is worth taking a look to see if any of the new features would be worth using. Using Tasks we can queue up worker threads in a similar manner.

new Task(DoWork, null).Start();

private void DoWork(object state)
{
    Console.WriteLine("Thread {0} doing work", Thread.CurrentThread.ManagedThreadId);
    Thread.Sleep((new Random()).Next(5, 5000));
    Console.WriteLine("Thread {0} DONE work", Thread.CurrentThread.ManagedThreadId);
}
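As a side note, the Task-based version is usually written with Task.Factory.StartNew, which creates and starts the task in one call. A small self-contained sketch:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Create and start the task in one call instead of new Task(...).Start()
        Task t = Task.Factory.StartNew(() => DoWork(null));
        t.Wait();   // unlike QueueUserWorkItem, a Task is easy to wait on

        Console.WriteLine("All work complete");
    }

    private static void DoWork(object state)
    {
        Console.WriteLine("Thread {0} doing work", Thread.CurrentThread.ManagedThreadId);
        Thread.Sleep((new Random()).Next(5, 500));
        Console.WriteLine("Thread {0} DONE work", Thread.CurrentThread.ManagedThreadId);
    }
}
```

The easy waiting (and continuations, cancellation, and so on) is where Tasks start to pay for themselves over the raw ThreadPool.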



iPhone Cloud Based Architecture, Part 2: The Web API Services


If you haven't read Part 1, it might be a good idea to review that first to get an idea of what we're setting out to do. Join me today as I kick off the coding part of the series. (Yay code!)

I like using real examples or pseudo-real examples. Unfortunately my creative energy was all zapped out, so the best I could come up with was to build a web site and app that is a TODO list. Unfortunately I'm just not as cool as jonwagner, where the Insight library pours nice ice cold beer in code. So we're just stuck with a TODO list. (TODO #1: come up with a more exciting sample app.)

And let's also assume that we've got our data store, a generic services layer, and a nearly out-of-the-box MVC3 application. There are lots of blogs about the various design patterns for those layers, so let's say that's all ready. The rest of this article is going to focus on what else I've added to the MVC project to facilitate service calls that will eventually be made from the iPhone. We have the Minimum Viable Product (MVP), some users, and now we want to have a native mobile app. (TODO #2: build native iOS app.)
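As a concrete (hypothetical) example of the kind of service action the iPhone will eventually call, an MVC3 controller can return JSON directly. The controller name and data here are made up for the TODO example:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// A hypothetical MVC3 service controller for the TODO sample app.
// The iPhone app (or any ajax client) can GET /Todo/GetItems and receive JSON.
public class TodoController : Controller
{
    [HttpGet]
    public JsonResult GetItems()
    {
        // In the real app this would come from the service layer;
        // hard-coded here to keep the sketch self-contained.
        var items = new List<string> { "Buy milk", "Ship MVP" };

        // MVC3 blocks JSON on GET requests by default; opt in explicitly.
        return Json(items, JsonRequestBehavior.AllowGet);
    }
}
```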

But Wait...

I'm skipping to what eventually worked, but I should point out that I started off far from there. A good question is why we're building these services this way versus just building a mobile site....

My Approach #1: Make it all HTML/Server Side

The first approach that I implemented was based on what I learned about how products like PhoneGap work. If you're not familiar with these types of mobile platform products, they let you develop just about everything in plain javascript and html offline. It's pretty slick too, because if you can code in jQuery and you know jQM then you should be able to create a pretty rich experience. The value add of PhoneGap is that it provides a way to package that up on the iPhone (and Android, etc.). You can do it a number of ways, but a common approach is to use jQM or Sencha to create your UI and use ajax to call your services to get json or html or both.

It didn't take me long to get a decent understanding of what was going on. They're essentially creating a view that is a UIWebView and providing some hooks in there for getting access to some of the native functions. Hey, I'm a technologist, this makes sense to me. Maybe I should do this. And since I know exactly what I want it to do, I'm not going to need PhoneGap to make this work. I can just code up a specialized UIWebView and make things work exactly as I want. In fact I'll just have the server return iOS-optimized html and keep it super lightweight. This will be E-A-S-Y! I could go so far as to bind to ShouldStartLoad to get even more sophisticated...

And in fact this is exactly what I did for iteration one.



iPhone Cloud Based Architecture, Part 1


One of the things that we want to do at CollectedIt! is provide users with a frictionless way to share pictures of their collectibles. In fact the original incarnation of the product launch was a vision of Pinterest for collectibles, with a marketplace on the roadmap. The marketplace is still on the roadmap, but we're going to focus more on the collector experience for now. After the initial web-only launch we knew that it was still a bit of a hassle to upload your pictures, so we really wanted a streamlined process. To do that we started with an iPhone app, with hopes of launching an Android app in the near future. As a start-up we had to pick where to focus our resources, and right now it still makes sense to focus on the iPhone first, everything else second.

This will be a series of three to five articles on how we built the iPhone app. I'm going to get fairly specific here and eventually we'll get to some code. This first article, however, will focus on the overall picture. In fact, here it is:





Remember in a previous post I mentioned that each layer has a cost. In fact, originally I didn't have the layer referred to as "Phone.Services". What we found is that we could dramatically improve performance by having an intermediate layer that caches data on the phone. But enough of that, let's dive a little deeper. I'll go through each layer, give an overview, and describe its purpose.


C# // Coding

Using C# dynamic objects to perform runtime text templating

One of the first large-ish projects I worked on here at CollectedIt was creating a notification system. This notifies users of actions that other users took with their collection (commented, agreed/disagreed with item stats, etc.). While I hope to blog more about all the technical challenges that came up while developing the system, today I am going to concentrate on some of the text templating we do with the notifications.

CollectedIt is architected in such a way that database calls are few and far between whenever the current action is on a code path that could be called from a user-facing interface (website, iPhone app, etc.). This architecture minimizes lag time and leaves us in a good position to scale out. However, as with most architectural choices, there are trade-offs. The biggest trade-off for the notification system was that the front-end knew enough about the action being performed to know it should generate a notification, but knew virtually no details about the objects that were part of the notification.

A more concrete example:
Arthur is browsing medieval collections and stumbles upon an item in Tim's "Enchanted Weapons" collection called "Staff that shoots flames". This item is really neat to Arthur, so he wishes to give Tim kudos. Once Arthur clicks the kudos button, this triggers a notification. At notification generation time all that is known is:

  1. Logged in user id
  2. Current collection id
  3. Current item id
  4. A kudos was given

Nothing is known about Arthur, or Tim, or "Enchanted Weapons", or "Staff that shoots flames". It would be fairly trivial to perform a DB query joining together 3 or 4 tables to get all the information needed, but we are in code that executes as a result of the kudos button being clicked, so we want to get back to the user as soon as possible. Doing an extra DB query (particularly one that involves 4 tables) is not the quickest way to get that information.

What we decided was that the notification could be generated with tokens that would be replaced later down the line. The first thought was to just use Razor, which would be cool; however, the Razor parser is marked for internal use and I have been burnt by using internal methods before (not to say it's never appropriate to use undocumented methods... but that's another blog entry). Back to DuckDuckGo to see if anything was out there to do some sort of text templating with a CollectedIt object and some text.

I ran into T4, which at first look seemed like it would work. Looking deeper, though, there is a compile-time class that gets generated, and the runtime just uses that generated class to do the processing. This won't really work for us since our template is also generated at runtime.

A little more time searching I came up with nothing that would really do what I wanted. So I decided to experiment a little writing my own. Since I wanted to write this quick and there really is no reason to write a full blown text processor (although that would be fun) I needed to boil down what exactly it was I was trying to accomplish.

  • Flexible text replacement
  • Not much logic necessarily needed inside the template itself
  • Both template and replacement objects would be generated at runtime

The first thing I decided to do was take a look at what I could get with C#'s dynamic type. I have used dynamic objects in the past to do things like Object.RuntimeProperty, but that's not exactly what I have here. I have Object and "RuntimeProperty", where "RuntimeProperty" is just a string. There may well be a way to use "RuntimeProperty" directly on a dynamic object, but I could not find one (if anybody knows of a way let me know in the comments). Instead I went down the reflection route, since at runtime there is really no difference between a dynamic object and a compiled object when inspecting them with reflection.

Type dynamicType = o.GetType();
PropertyInfo p = dynamicType.GetProperty(property);   // look up a property by its string name
object dynamicPropValue = p.GetValue(o, null);
FieldInfo f = dynamicType.GetField(property);         // or look up a field the same way
object dynamicFieldValue = f.GetValue(o);

Great! That takes care of runtime objects and their properties. What about the text template itself, though? Well... I know regular expressions.

In order to not completely reinvent the wheel I picked the T4 syntax (and specifically only the subset of T4 that replaces a token with a string: <#= Property #>). This is pretty easy to detect with a regex:

<#=\s*(?<prop>[a-zA-Z]\w*)\s*#>
With the reflection and the regex we have all the tools needed to satisfy the requirements we came up with. All that's left is to package it up in a nice, usable form. To figure out exactly how to package it, I looked at how the text templating would be called.

Continuing with the Arthur/Tim example from above, the code creating the kudos notification would like to generate the notification with an interface like

string notificationText =
	"<#= Author #> really likes your <#= Item #> in <#= Collection #>";
string notification = notificationText.ProcessTokens(new {
	Author = "Arthur",
	Item = "Staff that shoots flames",
	Collection = "Enchanted Weapons"
});

This points to using an extension method, and in fact that is exactly what we went with. The whole extension method is

public static string ProcessTokens(this string s, dynamic o)
{
	Type dynamicType = o.GetType();

	string composedString = s;
	MatchCollection tokens = _tokenRegex.Matches(s);
	foreach (Match token in tokens)
	{
		string property = token.Groups["prop"].Value;
		PropertyInfo p = dynamicType.GetProperty(property);
		if (p != null)
		{
			composedString = composedString.Replace(token.ToString(), String.Format("{0}", p.GetValue(o, null)));
		}
		else
		{
			FieldInfo f = dynamicType.GetField(property);
			if (f != null)
			{
				composedString = composedString.Replace(token.ToString(), String.Format("{0}", f.GetValue(o)));
			}
		}
	}

	return composedString;
}

private static readonly Regex _tokenRegex = new Regex(@"<#=\s*(?<prop>[a-zA-Z]\w*)\s*#>");

That's how we solved the problem of having a disjointed read/write object system. Feel free to use the code snippets above in your own projects to solve any sort of problem where you need runtime text and runtime objects to generate a string. Also make sure to drop us a line with questions, suggestions, or just some kudos.


Technology doesn't matter in a tech start up!

If you're launching a new tech-based start-up here's a hint:

Your technology doesn't matter.



Seriously. It doesn't. However I bet that you're going to make a lot of technology related mistakes that are going to directly affect the likelihood of your success. In fact you may find that you're working against success.


Welcome to the first post on the CollectedTech blog. We're launching something new and pretty ambitious. We have a general CollectedIt! blog if you want to follow our development. This blog is going to be more technology-focused. We're currently bootstrapping our product, so not everyone is full time yet, but we all have many years of direct experience with start-ups. I have been working at start-ups since graduating from Virginia Tech in the mid-90s. In fact, I met my wife at the very first start-up that I worked at in Rochester, NY. Since then I've relocated to the Philadelphia area where I've had my fair share of successes and also start-up failures. I've been a coder, VP of Engineering, Architect, Founder, Ops Support, QA, Janitor, Web Lackey, you name it.

Future blog entries will be much more technical, but I wanted to start off with a very practical topic.

Why your technology doesn't matter

Unless you're certain, I mean really certain like "OMG! I've already got 1,000 paying customers" certain, then you don't know what your product is going to look like when it's successful. You don't know what's going to work and what won't. If you think you do, you're wrong. Don't turn off your ambition or your optimism. You just have to understand that you're not going to be an overnight success and you're going to go through a lot of necessary product churn. Pinterest, Instagram, Rovio (Angry Birds) were not overnight successes. You won't be either. And that's a good thing.

A Tale of Two Start-ups

I've been part of both of these companies. I joined "Company A" during its first year and was something like employee #12. When I got there the technology was absolutely awful. When the CEO would demo the site he'd have to do a little bit of misdirection by doing his salesy thing, clicking on a link when no one was looking, talking some more, and then bam! the site would load. Page load times with just a single user were terrible. Multiple seconds. Unit tests? Fuggedaboutit. Automated builds? LOL. We were brought in to help rebuild the site. We did that. Twice. Different technology stack each time. It was painful.

At "Company B" I was an actual, honest-to-goodness Founder (capital F). We started off right from the beginning. We implemented a very early form of MVC before MVC was widely available on the platform. We had unit tests. Lots of great unit tests. Setup was completely automated. We had an architecture that could support trillions of users. And we had enough money in the bank to last about 18 months, which should have been more than enough time to get traction. We built some of the cleanest code that you could imagine. Rock solid.

Years Later...

Now 12 years later Company A is still doing well and many of the early employees have had payouts to one degree or another. Company B was hit very hard by the financial crisis, never really recovered, and entered the deadpool. Why? Do I blame the financial crisis? Nope. It didn't help mind you, but we did something more fundamentally wrong.

Product first. Technology second.


We had it backwards. We had the seed crystal of a good idea, but focused far too much on building awesome technology to deliver that vision. Without any customer validation. Mistake. Bad mistake. And I personally had us focus too much on serving trillions of customers when we had almost zero. And unit tests? We were awesome. But it did not matter. And of course all this was already very well publicized (which is part of why I kick myself now). So the part that I controlled--the technology--was working against our success. At the time I couldn't see that, but now I do.

If you're launching a start-up or you're in a very early phase of a start-up it is far more important to adopt Exception Driven Development than Test Driven Development. You should attempt to ship as soon as humanly possible. In some cases that may mean that you should hack the crap out of your code to get it to ship. The less certain you are of your product, the more you should hack to ship. 

Bright Rule #1: As Certainty Decreases, Hacking should Increase

Obviously there is a balance here, and I am being hyperbolic. Software is all about tradeoffs, so here are the tradeoffs that you should make when at really early stages:

  • Don't worry about unit tests as much as good diagnostics. Even if (or probably especially if) you're bootstrapping, you have limited keystrokes. Use those keystrokes to build an awesome diagnostic system at every layer so that when things do go wrong you know how to fix them. We'll talk more about this in the future, but leverage tools like the MVC mini-profiler and NLog, or your platform's equivalent.
  • Automation is still important. Anything that you can do to offload repeatable tasks is great. If you do something more than a few times, automate it. Personally I like using Jenkins to drive everything.
  • Make good decisions about boundaries so that you can address implementation details later. For example, I will typically have a very simplified architecture to start: presentation layer, service layer, data layer. By defining good service boundaries, if/when I do have to hack my code I can always swap in a new implementation later. (This is how CollectedIt works.) Just don't over-abstract! There's a solid reason why I only have 3 layers. Each extra layer costs something.
  • Keep the ick compartmentalized. You will have ick. It's unavoidable. But keep it in plain sight and in groups so you can fix it later. This goes along with the above bullet point.
  • Keep it as simple as possible! We currently use Ninject for our Dependency Injection, but that is only because I'm already very familiar with it, and it keeps the service dependencies very clean. I don't use it for unit test reasons.
  • Don't be afraid to have "fat controllers" if you're in an MVC model and you're testing things out. The homepage on CollectedIt changes regularly, so I don't worry about having overly pure code. When we get an iteration that converts well for us, we'll productionalize it.
  • You can't skimp on things like source control or setup automation. That will work for a week or two, but then it will work against you.
  • Most importantly: pick the technology you know! There are a ton to choose from, and if you follow Hacker News you may think you need to use the latest and greatest tech to win. No! When we had to pick a platform for iPhone development, we chose MonoTouch because that's what we know. Your technology choice does not matter. Yet.


I'm not the first to blog this, and just like I ignored it before, you won't be the last person to ignore this advice :-) . Let me update my original statement:

Your technology doesn't matter. Yet.


Make sure you last long enough that your technology does matter. The way you do that is by keeping things simple. If you're a hit and you need to scale, then you'll probably be able to raise money to get the resources to help you scale, or at least you'll have time to circle back and fix the issues. That doesn't mean you should write spaghetti code if you can avoid it. But don't get hung up on using the latest and greatest, because it ain't what's going to make you successful. Are you a Python nut who digs MySQL? Great. Go with that and worry about using MemSQL later. Think you need Redis integration now? Probably not. Keep it in the back of your head, but get your product shipped first. When you have more than 1,000 users and the site begins to show signs of wear, then optimize. Now is not the time to be clever, unless you're The Doctor.

Being both technical and a founder is a powerful combination, especially if you have the business sense and soft skills to pull it off. Don't spend all of your energy on the technical side, because it doesn't matter. Yet.

About CollectedTech

Hi. We're building something new. We're ambitious. We're not afraid. It's not the first time that we've launched something completely new, but this time we thought we'd open-source our thought process. We expect to make mistakes. Maybe you'll learn something from what we do right and from what we do wrong (we sure hope to).

We're relatively technology agnostic, but we are fluent with .NET technology so we usually start there. Start with what you know. Technology itself rarely makes the difference in a successful startup. We incorporate whatever we can to solve the problem.

The Bloggers

James Bright

James Bright, techapreneur, is an 18-year veteran of the IT world, and has the battle scars to prove it.

Christian Heinzmann

Christian Heinzmann, (Computer|Beer) Geek Dad - @sirchristian
