
Configure a new project in TeamCity


Index

  • Intro
  • Index and previous steps
  • Create your new Project in TeamCity
  • Add some configuration to your project

Intro


Before setting up an automatic deployment environment with a tool such as Octopus Deploy, you first need to configure TeamCity to build the solution.

In this "how-to" guide I'll walk you through the steps to prepare a proper build process in TeamCity.


Index and previous steps

First of all, install the OctoPack NuGet package in your .NET solution so that TeamCity can detect it.

Then push the changes to your repository; next we'll link that repository to TeamCity and configure it to build the solution.

Create your new Project in TeamCity

First go to Administration in TeamCity where we'll add our new Project.




Create a new Project within the "Project" option.





Give it a name and paste the URL of your repository. In this case it's a Bitbucket repository.





Add some configuration to your project


Once our project is created and linked to the repository, it's time to tune the build process.

Go to the project list and select our build configuration by clicking on its name.





In the following window select "Edit Configuration Settings"




In General Settings, add a Build number format; this will effectively be the folder where TeamCity deploys the solution.




Within "Version Control Settings" go and edit your VCS configuration.





Add the following text within the "Branch specification":
+:refs/heads/*
+:refs/heads/(feature/*)
+:refs/heads/(release/*)
+:refs/heads/(hotfix/*)
+:refs/(tags/*)
+:refs/pull/(*/merge)
+:refs/pull-requests/(*/merge-clean)


This will make TeamCity run a build no matter which branch you push to the repository.





Go to the "Build Steps" option, where we'll add two new steps:
  • NuGet Installer
  • Visual Studio


For the NuGet Installer step, copy the following configuration (if you can't see all of these options, click on "See advanced configuration" at the bottom of the page).


Remember to add the name of your solution file in the "Path To Solution File" textbox (this parameter may change from one solution to another).





For the Visual Studio build step do the same as the previous step.





As a final step, add a trigger using the default configuration.




TeamCity how to clean up old artifacts

Index



  • Intro
  • Problem!
  • Solution


Intro

I've been playing for a while with TeamCity (JetBrains). It is a great tool to manage your builds and put continuous integration in place in your team. But that's not the best part: the guys from JetBrains don't charge you anything for the Professional license, which includes all of this:

  • Professional Server License
    • 20 build configurations
    • full access to all product features
    • support via forum and issue tracker
    • 3 build agents included, buy more as necessary


Problem!

It works so well that you'll probably forget it's there. I only noticed it when the server's disk was getting full. The culprit is the artifacts: the NuGet packages our builds create. They are stored by TeamCity, which uses them to know which results need to be shipped.

If you didn't set up a proper clean-up process on your TeamCity box, I'll show you how.

Solution

First of all, log in to your TeamCity instance; the list of projects will be displayed. Select one of the projects and it will take you to another window, where you need to click on the option called "Edit Project Settings".

On the left menu, click on the option "Clean-up rules". You'll be taken to a view with your projects, the templates used by those projects, and a third column with the clean-up setup. Click on "Edit".


In the following window, select "Custom policy" under "Clean everything including artifacts, history and statistical data". Then define the number of days or the number of successful builds you want to keep, as displayed below:


Finally, within the Administration menu in TeamCity, we can schedule this task on a daily basis and test it live, as displayed in the following picture:



That's all. I hope you enjoyed the lessons and I'm looking forward to hearing your comments.

Day 16 Programming in C# 70-483

Index


  • Introduction
  • Databases
  • XML
  • JSON
  • Web services
  • References

Introduction

This post is part of a series of posts to help you prepare for the MCSD certification, particularly certification exam 70-483, based on the book:


You will find all the code in the following GitHub repository. Let's talk a little bit about databases, XML, JSON and web services in a .NET development environment.

Databases


Applications need a persistent storage system to save data; this can be a database accessed via ADO.NET or the Entity Framework. You can also use a web service and retrieve a response in JavaScript Object Notation (JSON) or Extensible Markup Language (XML). All the .NET functionality to work with databases lives in the System.Data namespace, and it offers two different approaches:
  • connected: you run queries using Structured Query Language (SQL) to create, read, update, and delete data (known as CRUD operations)
  • disconnected: you use DataSets and DataTables to mimic the database structures. Any change made can be sent back to the data store by using a DataAdapter.
Providers:
  1. Microsoft
  2. SQL
  3. Oracle
  4. MySQL
Connecting: we always need to define some connection details (DbConnection class) using a connectionString that provides the database type, the location and the credentials to log in. Connections are used within a using statement because IDisposable is implemented. Typically these strings are stored in a config file (app.config or web.config), but you can build yours dynamically by using a DbConnectionStringBuilder subclass: OracleConnectionStringBuilder, SqlConnectionStringBuilder... Hard-coding them is bad practice; as mentioned before, you can save them in a config file and retrieve them by calling the ConfigurationManager class like here:
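A minimal sketch of reading a connection string from the config file; the "MyDb" key and the use of SqlConnection are assumptions for the example:

```csharp
using System.Configuration;      // reference System.Configuration.dll
using System.Data.SqlClient;

public static class ConnectionFactory
{
    // Reads the connection string named "MyDb" from app.config/web.config.
    public static SqlConnection Create()
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
        return new SqlConnection(connectionString);
    }
}
```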


Connecting is a time-consuming operation, and leaving a connection open can prevent other users from accessing the storage; instead, you can rely on the connection pool to save resources and time.

Selection: when running a query against a database you can use the SqlCommand class, which returns a SqlDataReader that keeps track of where you are in the result set. Async/await is supported as well. SqlDataReader is a forward-only stream: you can't go back while you're reading. You can access the columns by index or by name by calling:

  • GetInt32(int index)
  • GetGuid(int index)
  • GetString(int index)
You can even batch multiple operations together, in which case the SqlDataReader will return multiple result sets. You can then move over them by calling NextResult() or NextResultAsync().
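A sketch of reading a result set with SqlDataReader; the People table and its columns are hypothetical:

```csharp
using System;
using System.Data.SqlClient;

public static class PeopleReader
{
    public static void PrintNames(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM People", connection))
        {
            connection.Open();
            // forward-only: we can only move ahead through the result set
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);        // access by index
                    string name = reader.GetString(1);
                    Console.WriteLine("{0}: {1}", id, name);
                }
            }
        }
    }
}
```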

Update: when you change something in a database you don't get back a result set with the affected information; instead you end up with an integer representing the number of rows that were modified. Run ExecuteNonQuery() or ExecuteNonQueryAsync() on a command.

Parameters: you typically use parameters in your queries when filtering selections or updating data. Never concatenate user-interface input directly into your queries, as this is a potential SQL injection vector. Instead, use parameterized SQL, which produces a more generic query that is easier to precompile into an execution plan, yielding better security and performance.
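A parameterized UPDATE combining the two previous points; the table and column names are made up for the example:

```csharp
using System.Data.SqlClient;

public static class PeopleUpdater
{
    public static int Rename(string connectionString, int id, string newName)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "UPDATE People SET Name = @name WHERE Id = @id", connection))
        {
            // parameters keep user input out of the SQL text
            command.Parameters.AddWithValue("@name", newName);
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            return command.ExecuteNonQuery(); // number of rows modified
        }
    }
}
```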

Transactions: (ACID) key properties:
  1. Atomicity: if one fails, they all fail (rollback).
  2. Consistency: from one valid state to another.
  3. Isolation: Multiple concurrent transactions won't influence each other.
  4. Durability: committed transactions result is always stored permanently.
If nothing goes wrong you call TransactionScope.Complete() within a using statement. Transactions can be created using three options:

  1. Required: Join the ambient transaction or create a new one if it doesn't exist.
  2. RequiresNew: Start a new transaction.
  3. Suppress: Don't take part in any transaction.
The .Net framework manages transactions for you. If the transaction uses nested connections, multiple databases or multiple resources it will be promoted to a distributed transaction (avoid if possible).
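A minimal TransactionScope sketch, assuming System.Transactions is referenced; the commands you run inside the scope are up to you:

```csharp
using System.Transactions;   // reference System.Transactions.dll

public static class TransferExample
{
    public static void Transfer()
    {
        // Required: join the ambient transaction or create a new one
        using (var scope = new TransactionScope(TransactionScopeOption.Required))
        {
            // ... run your commands here; they all commit or all roll back

            scope.Complete(); // without this call, Dispose rolls everything back
        }
    }
}
```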

ORM (Object Relational Mapper): you can write your SQL statements manually, but as your app grows this ends up being a nightmare to maintain and improve. This is where ORMs like Entity Framework come in handy, generating all those queries for you.

It provides three different approaches:
  1. Database First
  2. Model First: typically use a graphical tool
  3. Code First: you define everything in code: your entities, relations and so on.
    1. DbContext is used to create your own context, which is the interface between your code and the database. 
    2. By adding entities to the context you'll end up creating new rows and to commit that calling the SaveChanges() method. 
    3. Conventions are applied like the Id property is a primary key and more...
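The Code First points above can be sketched as follows; the Person entity and MyContext are hypothetical names, and the EntityFramework NuGet package is assumed:

```csharp
using System.Data.Entity;   // EntityFramework NuGet package

// convention: a property called Id becomes the primary key
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// DbContext is the interface between your code and the database
public class MyContext : DbContext
{
    public DbSet<Person> People { get; set; }
}

public static class CodeFirstDemo
{
    public static void AddPerson()
    {
        using (var context = new MyContext())
        {
            context.People.Add(new Person { Name = "Anna" }); // a new row
            context.SaveChanges();                            // commits it
        }
    }
}
```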

XML

It's a document format designed to be readable both by humans and computers. The first line is optional; it's called the prolog and tells you that you are reading an XML file and which encoding is used. It typically looks like this:

<?xml version="1.0" encoding="UTF-8" ?>

Now we'll take a look at some .Net classes to help us work with this type of files:

  • XmlReader: reads XML files in a hierarchical manner, forward-only and without caching.
    • Create(): static. Takes an XmlReaderSettings parameter to configure how to read and which data to skip.
  • XmlWriter: writes XML files, forward-only and without caching.
    • Create(): static. Takes an XmlWriterSettings parameter.
  • XmlDocument: navigate and edit smaller documents (it's slower than XmlReader/XmlWriter). After editing you can save. It uses XmlNode to move through your document and change attributes in nodes.
  • XPathNavigator: a nifty way to navigate through an XML document using XPath (a query language for XML).
See an example here of how XML looks like:

<employees>
    <employee>
        <firstName>John</firstName> <lastName>Doe</lastName>
    </employee>
    <employee>
        <firstName>Anna</firstName> <lastName>Smith</lastName>
    </employee>
    <employee>
        <firstName>Peter</firstName> <lastName>Jones</lastName>
    </employee>
</employees>
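A forward-only pass over the employees document above with XmlReader might look like this (a sketch; the string overload via StringReader is just for self-containment):

```csharp
using System;
using System.IO;
using System.Xml;

public static class EmployeeXmlReader
{
    // prints every <firstName> element in document order
    public static void PrintFirstNames(string xml)
    {
        using (XmlReader reader = XmlReader.Create(new StringReader(xml)))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element
                    && reader.Name == "firstName")
                {
                    Console.WriteLine(reader.ReadElementContentAsString());
                }
            }
        }
    }
}
```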

JSON

JSON (JavaScript Object Notation) is a lighter alternative to XML; it has fewer rules, which is why it is lighter. The .NET library Newtonsoft.Json, available at http://json.codeplex.com/, offers functionality to work with this format. It is typically used in asynchronous calls between web sites and servers, also known as AJAX (Asynchronous JavaScript and XML; in reality XML was replaced by JSON, so AJAJ would be more accurate).

See here below the previous XML example this time using JSON

{"employees":[
    {"firstName":"John", "lastName":"Doe"},
    {"firstName":"Anna", "lastName":"Smith"},
    {"firstName":"Peter", "lastName":"Jones"}
]}
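Parsing the JSON above with Newtonsoft.Json could be sketched like this (assuming the Newtonsoft.Json NuGet package; the helper name is made up):

```csharp
using Newtonsoft.Json.Linq;   // Newtonsoft.Json NuGet package

public static class EmployeeJson
{
    public static string FirstEmployeeName(string json)
    {
        JObject document = JObject.Parse(json);
        // indexers navigate the structure: the "employees" array, item 0
        return (string)document["employees"][0]["firstName"];
    }
}
```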


Web services

Web services and micro web services have turned into loosely coupled solutions for those who want to integrate separate systems. .NET offers Windows Communication Foundation (WCF) to build your own services. You build a class with all the logic your service offers and decorate it with the ServiceContract attribute next to the class name. Bear in mind you have to add a System.ServiceModel reference to your project. For each of the exposed methods you use the OperationContract attribute. Example:
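A minimal contract sketch; ICalculatorService and its Add method are hypothetical names used only for illustration:

```csharp
using System.ServiceModel;   // reference System.ServiceModel.dll

[ServiceContract]
public interface ICalculatorService
{
    [OperationContract]
    int Add(int a, int b);   // only decorated methods are exposed
}

public class CalculatorService : ICalculatorService
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```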



Every WCF service has the following ABC properties:

  • Address: defines the endpoint. This is the URL you have to tackle to open the connection.
  • Bindings: it configures the protocols and transports that can be used to call your service: HTTP, HTTPS, named-pipe connections...
  • Contract: this defines which are the operations your web service exposes.
If the web service already exists, VS offers a handy way to add a service reference to your project, which maps all the exposed methods for you. Behind the scenes, the proxy uses the configuration file (app.config, web.config) to get the ABC settings.

Day 15 Programming in C# 70-483

Index


  • Introduction
  • Files directories and drives
  • Streams
  • Network
  • Async IO streams
  • References

Introduction

This post is part of a series of posts to help you prepare for the MCSD certification, particularly certification exam 70-483, based on the book:


You will find all the code in the following GitHub repository. Let's talk a little bit about files, streams and asynchronous I/O in a .NET development environment.

Files directories and drives


Within the System.IO namespace you'll find a lot of classes related to files, paths, drives and folders. See the main ones in this list:
  1. File
    1. Exists(string path)
    2. Delete(string path)
    3. Move(string source, string destination)
    4. Copy(path, destPath)
  2. FileInfo
    1. Exists: property
    2. Delete()
    3. MoveTo(string destination)
    4. CopyTo(destPath)
  3. Path static class within the System.IO namespace
    1. Combine(folder, fileName): returns a string with the full path for that file.
    2. GetDirectoryName(path)
    3. GetExtension(path)
    4. GetFileName(path)
    5. GetPathRoot(path)
  4. DriveInfo: it enumerates the current drives and display some information about them
    1. GetDrives()
    2. Name / DriveType / VolumeLabel / DriveFormat
  5. Directory
    1. CreateDirectory()
    2. GetFiles(string path)
  6. DirectoryInfo (multiple operations against a folder)
    1. Create()
    2. GetDirectories(): searchPattern can be passed as a parameter to reduce the amount of directories retrieved.
    3. EnumerateDirectories(): with GetDirectories you get a list and must wait until it is completely retrieved, but with this method you can start enumerating the directories before they have all been retrieved.
    4. MoveTo()
    5. Move()
    6. GetFiles();
  7. SearchOption: specify a search pattern to be performed against a Directory/DirectoryInfo object
    1. wildcards: * stands for any group of characters, ? stands for any single character.
  8. DirectorySecurity: grant access to folders, hosted in System.Security.AccessControl namespace.

Creating a folder with insufficient privileges can end up throwing an UnauthorizedAccessException, and trying to remove a folder that doesn't exist throws a DirectoryNotFoundException.
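The File, Directory and Path members listed above can be sketched together like this (the C:\temp path is just an example location):

```csharp
using System.IO;

public static class FileBasics
{
    public static void Demo()
    {
        // Combine builds the full path: "C:\temp" + "test.txt" => "C:\temp\test.txt"
        string folder = @"C:\temp";
        string path = Path.Combine(folder, "test.txt");

        Directory.CreateDirectory(folder);      // no-op if it already exists
        File.WriteAllText(path, "hello");

        string extension = Path.GetExtension(path);   // ".txt"
        string name = Path.GetFileName(path);         // "test.txt"
        bool exists = File.Exists(path);              // true at this point

        File.Delete(path);
    }
}
```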

Streams


A stream is an abstraction of a sequence of bytes, which means that if you want to save text, you first need to transform that text into an array of bytes. Types of stream operations:
  • Reading
  • Writing: e.g. Write(byte[] array) when writing to a FileStream
  • Seeking: some streams have the concept of a current position that you can move.
To convert between bytes and strings we use the System.Text.Encoding class, which supports different standards for the transformation, such as:
  • UTF-8
  • ASCII
  • BigEndianUnicode
  • Unicode
  • UTF32
  • UTF7
To make this process easier, the File class offers a CreateText() method, which returns a StreamWriter object that we can use to write directly to a file. To read the data you can use a FileStream object: you read every byte one by one using the ReadByte() method and finally translate those bytes using a particular encoding, e.g. UTF-8, by calling Encoding.UTF8.GetString(data), which returns a string. The alternative, if you know you are reading text, is a StreamReader and its ReadLine() method. See the example below:

using System.IO;
using System.Text;

namespace Chapter4
{
    public static class CompareFileStreamReader
    {
        public static void HowToReadAFile()
        {
            string path = @"c:\temp\test.txt";

            // read a file using a FileStream object
            using (FileStream fileStream = File.OpenRead(path))
            {
                byte[] data = new byte[fileStream.Length];
                for (int index = 0; index < fileStream.Length; index++)
                {
                    data[index] = (byte)fileStream.ReadByte();
                }
                System.Console.WriteLine(Encoding.UTF8.GetString(data));
            }

            // read a file using a StreamReader object
            using (StreamReader reader = File.OpenText(path))
            {
                System.Console.WriteLine(reader.ReadLine());
            }
        }
    }
}

Working with multiple streams together (the decorator pattern) can be handy sometimes, especially when compressing/decompressing streams (FileStream, MemoryStream, ...) using the GZipStream class from the System.IO.Compression namespace.
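A compression sketch of that decorator idea; GZipStream wraps the MemoryStream so that writes pass through the compressor:

```csharp
using System.IO;
using System.IO.Compression;

public static class Gzip
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // GZipStream decorates the MemoryStream: writes are compressed
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(data, 0, data.Length);
            }
            return output.ToArray();
        }
    }
}
```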

Another interesting class is BufferedStream, which comes in handy when you need to read/write big chunks of data (which is what drives are optimized for, instead of reading byte by byte). It wraps a FileStream object and works out whether it's possible to read larger chunks of data at once.

To finish this chapter about streams, I want to highlight that files can be locked by the OS even when you might think a file was deleted. For example, a File.Exists call can return false and we assume there's no such file, while in reality the file is in use by another thread. This is important to consider when writing/reading resources: always wrap your code in try/catch blocks. Typically you want to handle DirectoryNotFoundException and FileNotFoundException.

Network


The .NET framework enables your apps to communicate over the network. This is where the System.Net namespace becomes interesting and the WebRequest and WebResponse classes vital. There are protocol-specific implementations such as HttpWebRequest and HttpWebResponse, based on HTTP. See a usage example here:
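A minimal WebRequest/WebResponse sketch; the helper name is made up and the URL is whatever you pass in:

```csharp
using System.IO;
using System.Net;

public static class Downloader
{
    public static string DownloadString(string url)
    {
        WebRequest request = WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```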


Async IO streams


All the classes shown in this post so far are synchronous. Working with resources typically implies some delays we need to take into consideration; if we don't, we can make our apps look unresponsive (the app appears frozen and the user can't perform any action).

C# 5 introduced the async/await keywords; these make the compiler responsible for turning your "synchronous-looking" code into a state machine that handles all possible situations. Many methods have an async equivalent that returns Task or Task<T>, depending on whether the method returns nothing or a value of type T. Using these async methods will make your apps more responsive and scalable.

Real asynchrony makes sure your thread can do other work until the OS notifies you that the I/O is ready. Bear in mind not to await each call individually when the async calls can run in parallel. The following example shows how to await at the end of your function so you wait the minimum possible time: only for the longest call, instead of for each one individually.
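One way to sketch that pattern; the URLs are placeholders and HttpClient (System.Net.Http) is an assumption for the example:

```csharp
using System.Net.Http;          // reference System.Net.Http.dll
using System.Threading.Tasks;

public static class ParallelDownloads
{
    public static async Task<int> TotalLengthAsync()
    {
        using (var client = new HttpClient())
        {
            // start both calls first, without awaiting...
            Task<string> first = client.GetStringAsync("http://example.com/a");
            Task<string> second = client.GetStringAsync("http://example.com/b");

            // ...then await at the end: we wait only for the longest call
            string[] pages = await Task.WhenAll(first, second);
            return pages[0].Length + pages[1].Length;
        }
    }
}
```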


Today I'm in...

Hi there,

today I'll be attending the AWSome Day in London, a free training course about building the essential skills you need to work with AWS. This is the agenda for the day:



All sessions are presented by Amazon Web Services; the bullets under each session are the core competencies you will gain by its completion.

08:30 - 09:00  Registration
09:00 - 09:30  Welcome & Training Kickoff
09:30 - 10:45  AWS Compute Services
  • Create an Amazon Elastic Compute Cloud instance
  • Verify how to use Amazon Elastic Block Storage
10:45 - 11:00  Break
11:00 - 12:15  AWS Storage Services
  • Identify key AWS storage options
  • Describe Amazon Elastic Block Store
  • Create an Amazon Simple Storage Service bucket and manage associated objects
12:15 - 13:15  Break for Lunch
13:15 - 14:15  AWS Networking Services & Security
  • Identify the different AWS compute and networking options
  • Describe what Amazon Virtual Private Cloud is
  • Describe the security measures AWS provides
14:15 - 15:15  AWS Databases Services
  • Describe Amazon DynamoDB
  • Verify the key aspects of Amazon Relational Database Services
  • Execute an Amazon Relational Database Services
15:15 - 15:30  Break
15:30 - 16:30  AWS Deployment and Management Services
  • Identify what CloudFormation is
  • Describe Amazon CloudWatch metrics and alarms
  • Describe Amazon IAM
16:30 - 17:00  Closing Presentation


Day 14 Programming in C# 70-483

Index


  • Introduction
  • Logging and tracing
  • Profiling
  • Performance counters
  • References

Introduction

This post is part of a series of posts to help you prepare for the MCSD certification, particularly certification exam 70-483, based on the book:


You will find all the code in the following GitHub repository. Let's talk a little bit about logging, tracing, profiling and performance counters in a .NET development environment.

Logging and tracing

  • Tracing refers to monitoring the execution of your applications. Typically you enable it when you want to investigate something.
  • Logging is always enabled and tracks events occurring within your application, whether an error or just some useful information. If something critical occurs you can send an email to someone.

.NET helps you trace and log your app with the Debug class from the System.Diagnostics namespace. As its name suggests, it is only available in debug mode (the ConditionalAttribute with a value of DEBUG is applied to this class). By default it writes to the Output window. See the following example, where if Debug.Assert fails a message box asks you to retry, abort or ignore.
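A small sketch of both Debug calls; the result parameter and the message are arbitrary examples:

```csharp
using System.Diagnostics;

public static class DebugDemo
{
    public static void Run(int result)
    {
        Debug.WriteLine("result = " + result);   // goes to the Output window
        // in a debug build, a failing assert shows an Abort/Retry/Ignore dialog
        Debug.Assert(result >= 0, "result must be non-negative");
    }
}
```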


The TraceSource class can be used in a similar way, but it gives us more functionality through its three parameters:
  1. the severity of the event, via the TraceEventType enum:
    1. Critical: most severe
    2. Error: a handled problem
    3. Warning: something unusual
    4. Information: relevant data
    5. Verbose: least severe
    6. Stop/Start/Suspend/Resume/Transfer: related to the flow of the app
  2. an event ID number to group our calls into ranges of our own, for example:
    1. 1000-1999: Db calls
    2. 10000-19000: WS calls
  3. the message displayed.
Example of the TraceSource usage:
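A sketch along those lines; the source name "myApp" and the event IDs are arbitrary:

```csharp
using System.Diagnostics;

public static class TraceDemo
{
    public static void Run()
    {
        var trace = new TraceSource("myApp", SourceLevels.All);
        // parameters: severity, event ID, message
        trace.TraceEvent(TraceEventType.Critical, 0, "Couldn't connect to the database");
        trace.TraceEvent(TraceEventType.Information, 1001, "Db call finished"); // 1000-1999: Db calls
        trace.Flush();
        trace.Close();
    }
}
```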


By default everything is written to the Output window. TraceListeners can help you change this behavior. Types of TraceListeners:
  • ConsoleTraceListener
  • DelimitedListTraceListener
  • EventLogTraceListener
  • TextWriterTraceListener
  • XmlWriterTraceListener
Example of how to change the default output of a TraceSource using a TraceListener programmatically:


You can also define these listeners in your application configuration file (app.config / web.config), which makes it easier to change them in live environments instead of having to change your code and do a new deployment.

You can also write your logs directly to the Windows event log by using the EventLog class from the System.Diagnostics namespace (remember to run Visual Studio as administrator). The following example creates a new log in the Windows event log, and on the second execution adds an event to it. You can open the "Event Viewer" in Windows and look for "MyNewLog" under "Applications and Services Logs".
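A sketch of that two-run behavior; "MySource" and "MyNewLog" are the names used in the description above:

```csharp
using System.Diagnostics;

public static class EventLogDemo
{
    public static void Run()
    {
        if (!EventLog.SourceExists("MySource"))
        {
            // first execution (as administrator): registers the source and the log
            EventLog.CreateEventSource("MySource", "MyNewLog");
        }
        else
        {
            // subsequent executions: add an event to the log
            EventLog.WriteEntry("MySource", "Application started",
                EventLogEntryType.Information);
        }
    }
}
```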


You can read the event log programmatically by getting an EventLogEntry object and reading its properties. Besides, you can subscribe to changes via the EntryWritten event and be notified when a new entry is added. See examples here:


Profiling

Profiling refers to measuring how much memory your apps use, which methods are called, for how long... The Stopwatch class from the System.Diagnostics namespace will help you find bottlenecks in your code by telling you the time spent in certain areas, via its Start, Stop and Reset methods.
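A minimal Stopwatch sketch; the Action parameter stands in for whatever code you want to measure:

```csharp
using System;
using System.Diagnostics;

public static class TimingDemo
{
    public static void Measure(Action work)
    {
        var stopwatch = Stopwatch.StartNew();
        work();                                   // the code under measurement
        stopwatch.Stop();
        Console.WriteLine("Elapsed: " + stopwatch.ElapsedMilliseconds + " ms");
    }
}
```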

Visual Studio also has a wizard tool to check performance in your apps within the Analyze menu. You will find four options there:

  1. CPU sampling: an initial search for performance problems.
  2. Instrumentation: timing information for each function called.
  3. .NET memory allocation
  4. Resource contention data: for multithreaded applications; it helps you find out why methods have to wait until a certain resource is released.

Performance counters

The most typical are CPU usage, memory usage or the length of a query, and all of them can be viewed with the perfmon.exe app provided by Windows. You can also read them from code:
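A sketch of reading the total CPU counter; the category, counter and instance names are the standard Windows ones for processor time:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class CpuCounterDemo
{
    public static void Run()
    {
        // requires admin rights or membership in "Performance Monitor Users"
        using (var cpu = new PerformanceCounter(
            "Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();              // the first sample always reads 0
            Thread.Sleep(1000);
            Console.WriteLine(cpu.NextValue() + " %");
        }
    }
}
```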



Bear in mind you need to be an administrator or a member of the "Performance Monitor Users" group. As you can see in the example, the PerformanceCounter class implements the IDisposable interface, so we can use the using statement. The following types can be useful when you plan to create your own performance counters:

  • NumberOfItems32 / NumberOfItems64: number of operations
  • RateOfCountsPerSecond32 / RateOfCountsPerSecond64: calculate the amount per second of an item or operation
  • AverageTimer32
This is all for today; with this post we've finished chapter number 3. I hope you learnt something today and I hope to see you in the next post.

Day 13 Programming in C# 70-483

Index


  • Introduction
  • Build configurations
  • Precompiler directives
  • Program database files and symbols aka PDB
  • References

Introduction

This post is part of a series of posts to help you prepare for the MCSD certification, particularly certification exam 70-483, based on the book:


You will find all the code in the following GitHub repository. Let's talk a little bit about build configurations, precompiler directives and PDB files in a .NET development environment.

Build configurations


Default build configurations:
  • Release mode: code is fully optimized and no extra information for debugging is created.
  • Debug mode: the compiler inserts extra no-operation (NOP) instructions and branch instructions (e.g. a conditional piece of code that's never executed).
A scenario where you can see a big difference between these two configurations is when using timers. A Timer can be scheduled to run again after a period of time, but if the GC collects it (which optimized release builds make more likely when no live reference remains) this behavior is prevented.

Precompiler directives


C# doesn't have a specialized preprocessor (one that applies changes to your code before handing it off to the compiler), but it supports preprocessor compiler directives:
  • #if: uses the default C# logical operators (==, !=, &&, || and !) and evaluates a condition to decide whether or not to compile a piece of code. Ex:
    • #if DEBUG {SOME CODE} #endif
    • #if !WINRT {CODE FOR NON-WINRT PLATFORMS} #endif
  • #define: declares our own symbols to use in our #if's
  • #undef: removes a defined symbol
  • #warning: emits a message in the build output
  • #error: emits an error in the build output
  • #line: modifies the compiler's line number and, optionally, the file name
    • #line 200: sets the following line number to 200.
    • #line default: restores the original line numbering.
    • #line hidden: hides lines of code from the debugger, so you can skip pieces of your code
  • #pragma warning disable: starting point of a span with suppressed warnings
  • #pragma warning restore: end point of a span with suppressed warnings
The ConditionalAttribute (e.g. [Conditional("DEBUG")]) is a handier way to attach conditional behavior to your methods than preprocessor directives, which are inconvenient. On a related note, for types that don't override ToString, the debugger shows just the name of the type; a solution for that is the DebuggerDisplayAttribute (e.g. [DebuggerDisplay("{Name}")]).
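The directives and the ConditionalAttribute can be combined in one sketch; the VERBOSE symbol and the Logger class are made up for the example:

```csharp
#define VERBOSE   // symbols must be defined before any other code in the file

using System;
using System.Diagnostics;

public static class Logger
{
    [Conditional("DEBUG")]   // calls to Log disappear from release builds
    public static void Log(string message)
    {
#if VERBOSE
        Console.WriteLine("[verbose] " + message);
#else
        Console.WriteLine(message);
#endif
    }
}
```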

Program database files and symbols aka PDB


When running a build you can specify full debug information or pdb-only. The first option creates a pdb file and the assembly itself contains debug information; with the second option the assembly is not modified and the pdb file is still created (this is used in release mode). A PDB contains:

  1. Source file names and their lines
  2. Local variable names
None of this is stored within the assemblies, but it is useful for our debugging process. It helps Visual Studio provide the following when you are debugging:
  1. modules: the libraries required to run your app
  2. call stack: the calls between classes; the workflow.
  3. debugging previous versions of your applications: using a Symbol Server (Tools/Options/Debugging/General) you can easily tell VS to take the PDBs of previous versions of your code that are stored by your TFS.
  4. PDBCopy.exe: helps you split the private data from the public data in your PDBs when you deliver an app to the outside world.

Day 12 Programming in C# 70-483

Index


  • Introduction
  • What is an assembly
  • Signing
  • GAC
  • Versioning
  • WinMD
  • References

Introduction

This post is part of a series of posts to help you prepare for the MCSD certification, particularly certification exam 70-483, based on the book:


You will find all the code in the following GitHub repository. Let's talk a little bit about assemblies: signing, the GAC, versioning and WinMD in a .NET development environment.

What is an assembly

An assembly (.dll / .exe) is used to deploy/distribute your apps to other parties. In the past, shipping DLLs produced the following problems:
  • a new version of a DLL distributed without full testing: "DLL hell".
  • the installation process left traces everywhere: directories, registry...
  • security: third-party components could introduce security issues.
Assemblies are:
  • self-contained (no need to write to the registry or any other location).
  • language-neutral (VB, C#, ...)
  • versioned
  • easily deployable (by copying)

Signing

Types of assemblies:
  • strong-named: signed with a public/private key pair. Benefits:
    • Uniqueness
    • Versioning lineage: you hold the private key and you are the only one who can distribute updates, so your third parties are safe.
    • Strong integrity check
    • Security: strong-named assemblies can only reference other strong-named assemblies.
    • Remember: all of this only proves that the person who signed the assembly has access to the private key. To allow users to verify you as the publisher you need Authenticode: a technology that uses digital certificates to identify the publisher of an application.
  • regular
How to view the public key in a signed assembly:


  1. cd C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin
  2. sn -Tp "C:\Windows\Microsoft.Net\Framework\v4.0.30319\System.Data.dll"

Output:

   Microsoft (R) .NET Framework Strong Name Utility Version 3.5.30729.1
   Copyright (c) Microsoft Corporation. All rights reserved.
   Public key is
   00000000000000000400000000000000
   Public key token is b77a5c561934e089

Within an organization you need to secure your private key; if every employee had access to it, someone might steal it. To avoid this, use delayed (partial) signing. This procedure signs an assembly with the public key only and delays the use of the private key until the project is ready for deployment. You annotate the assembly's source code with two custom attributes from System.Reflection:

  1. AssemblyKeyFileAttribute: file containing the public key.
  2. AssemblyDelaySignAttribute: enables delay signing.

GAC

The Global Assembly Cache (GAC) lets multiple apps share assemblies while enhancing security. Deployment can be done in two ways:
  • In production using Windows Installer 2.0
  • In development with Gacutil.exe

Versioning

Each assembly has a version number using this pattern: {Major version}.{Minor Version}.{Build Number}.{Revision}:

  • Major: breaking changes
  • Minor: small changes in existing features
  • Build number: auto incremented
  • Revision: patches
All of this is saved in AssemblyInfo.cs as a pair of version attributes:
  • AssemblyVersion: incremented manually when you deploy to production.
  • AssemblyFileVersion: incremented with every build by your build server.
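In AssemblyInfo.cs that pair looks like this (the version numbers are just an example):

```csharp
using System.Reflection;

// AssemblyInfo.cs
[assembly: AssemblyVersion("2.1.0.0")]        // bumped manually for production
[assembly: AssemblyFileVersion("2.1.1423.7")] // bumped on every build by the build server
```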
After a deployment you end up with multiple versions of your assemblies within the GAC (side-by-side hosting). To match the right assembly you can use:
  • Application configuration files: you can define extra locations where your assemblies can be found. If a location is outside the scope of your app, the assemblies need to be strongly named.
  • Publisher policies: that are deployed to the GAC so the CLR knows what to bind.
  • Machine configuration

WinMD

With Windows 8, the new WinRT (Windows Runtime) was introduced: written in C++, with no managed environment, no CLR and no JIT compiler. It had no metadata, which is necessary to create mappings between native components and other languages. To make this work, Microsoft came up with a new file type called Windows Metadata (WinMD). These files (a "Windows Runtime Component" in VS, extension .winmd) let you create pieces of code that can be called from different languages such as JavaScript or C#. You only need to follow these restrictions:

  • Fields, parameters and return values of all the public types and members must be Windows Runtime types
  • A public class or interface cannot:
    • be generic
    • implement an interface that is not a Windows Runtime interface
    • Derive from types that are not inside the Windows Runtime
  • Public classes must be sealed
  • Public structures can have only public fields which must be value types or strings
  • All public types must have a root namespace that matches the assembly name and does not start with Windows.