WHAT'S NEW?

Closed for holidays

Yes!!!! It's that time of the year when I disconnect from everything in this world except beer and sun 🌞

I wish you all the best, boys and girls, and see you on the other side 😃 😃 😃




Amazon Aurora goes Multi-AZ

Index

  • TL;DR
  • Objective
  • Benefits
  • Problem
  • Solution

TL;DR

To put a DB cluster in front of your Multi-AZ DB instances, the best approach I found is to create a Route 53 DNS record pointing to the cluster and hit that DNS name from my component.

Objective

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

Benefits

Here are some benefits of Multi-AZ databases, from both the database and the component perspective:
  • Enhanced durability: if a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment where in case of failure, a user restore operation will be required (can take several hours to complete and any data updates that occurred after the latest restorable time will not be available).
  • Increased availability: if an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines. The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
  • No administrative intervention: DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
  • Simplify your component: because you hit a DNS record, there's no need to use the AWS SDK to discover DB clusters/instances anymore. You end up with cleaner code and no dependency on the Amazon API.

Problem

We start from an app running against a single DB and move to a single cluster fronting several DB instances in different Availability Zones.

The main problem we had when setting up the Aurora DB cluster was the limited support for clusters, from both the CloudFormation and the app perspective:
  • From the app:
    • On one hand, after spinning up the cluster, it was difficult for us to find the endpoint using the AWS SDK. Apparently AWS crops the cluster identifier to 64 characters or fewer, so you don't know what it will look like.
    • On the other hand, the AWS SDK client does not support filtering and paginates its results, which makes things more complicated.
  • From cloudformation:
    • Cluster identifiers cannot be defined from the CF templates.

Solution

After that discovery process we came up with a very straightforward solution: set up a DNS record in front of the DB cluster. This way our app points to the DNS name and never needs to worry about finding the cluster through the SDK.

Below are a couple of code snippets to enable Multi-AZ in your environment:
  • To create the DB replica and point the cluster to your main instance
  • To create the DNS record pointing to the cluster
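
A rough CloudFormation sketch of both steps follows; every name in it (DbCluster, db.example.com, the credentials, the instance class) is a hypothetical placeholder, and a real stack would also need a DB subnet group and related networking:

    Resources:
      # Aurora cluster; the instances below attach to it and act as
      # primary and replica across Availability Zones.
      DbCluster:
        Type: AWS::RDS::DBCluster
        Properties:
          Engine: aurora
          MasterUsername: admin           # placeholder
          MasterUserPassword: change-me   # placeholder

      DbPrimary:
        Type: AWS::RDS::DBInstance
        Properties:
          Engine: aurora
          DBClusterIdentifier: !Ref DbCluster
          DBInstanceClass: db.r3.large    # placeholder size

      DbReplica:
        Type: AWS::RDS::DBInstance
        Properties:
          Engine: aurora
          DBClusterIdentifier: !Ref DbCluster
          DBInstanceClass: db.r3.large

      # Friendly DNS name in front of the cluster endpoint.
      DbRecord:
        Type: AWS::Route53::RecordSet
        Properties:
          HostedZoneName: example.com.    # placeholder zone
          Name: db.example.com.
          Type: CNAME
          TTL: '60'
          ResourceRecords:
            - !GetAtt DbCluster.Endpoint.Address

With this in place the component only ever needs the db.example.com name, regardless of how AWS names the cluster underneath.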




Some lessons I've learnt integrating .NET Core

Index

  • Intro
  • Lessons

Intro

Every now and then there's a new trending thing in the market: Angular 2, webpack, React... Now it's time for me to tell you about .NET Core. It's been a while since it was announced during the Microsoft Build event, and the project has passed through a chain of important changes.

In this post I want to highlight some of the lessons I've learnt upgrading some of the components I typically work with to this version of the framework.

Lessons

1. Using xproj & csproj files together

There doesn’t seem to be any way for these two project types to reference each other. You move everything to xproj, but then you can no longer use MSBuild. If you are like us, that means your current setup with your build server won’t work. It is possible to use xproj and csproj files both at the same time which is ultimately what we ended up doing for our Windows targeted builds of Prefix. Check out our other blog post on this topic: http://stackify.com/using-both-xproj-and-csproj-with-net-core/

2. Building for deployment
If you are planning to build an app that targets non-Windows platforms, you have to build it on the target platform. In other words, you can't build your app on Windows and then deploy it to a Mac. You can do that with a netstandard library, but not a netcoreapp. They are hoping to remove this limitation in the future.

3. NetStandard vs NetCoreApp1.0
What is the difference? NetStandard is designed as a common standard so that .NET 4.5, Core, UWP, Xamarin and everything else has a standard to target. So, if you are making a shared library that will be a nuget package, it should be based on NetStandard. Learn more about NetStandard here: https://github.com/dotnet/corefx/blob/master/Documentation/architecture/net-platform-standard.md
If you are making an actual application, you are supposed to target NetCoreApp1.0 as the framework IF you plan on deploying it to Macs or Linux. If you are targeting Windows, you can also just target .NET 4.5.1 or later.

4. IIS is dead, well sort of
As part of .NET Core, Microsoft (and the community) has created a whole new web server called Kestrel. The goal behind it has been to make it as lean, mean, and fast as possible. IIS is awesome but comes with a very dated pipeline model and carries a lot of bloat and weight with it. In some benchmarks, I have seen Kestrel handle up to 20x more requests per second. Yowzers!
Kestrel is essentially part of .NET Core, which makes deploying your web app as easy as deploying any console app. As a matter of fact, every app in .NET Core is essentially a console app. When your ASP.NET Core app starts up, it activates the Kestrel web server, sets up the HTTP bindings, and handles everything. This is similar to how self-hosted Web API projects worked with Owin.
IIS isn't actually dead. You can use IIS as a reverse proxy sitting in front of Kestrel to take advantage of some of its features that Kestrel does not have: things like virtual hosts, logging, security, etc.
If you have ever made a self-hosted web app in a Windows service or console app, it all works much differently now. You simply use Kestrel. All the self-hosted packages for Web API, SignalR and others are no longer needed. Every web app is basically self-hosted now.
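
For illustration, here is a minimal sketch (my example, not code from the original post) of the console entry point that boots Kestrel in ASP.NET Core 1.0:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;

    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.Run(context => context.Response.WriteAsync("Hello from Kestrel"));
        }
    }

    public class Program
    {
        public static void Main(string[] args)
        {
            // The console app itself boots the Kestrel server; no IIS involved.
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Startup>()
                .Build();

            host.Run();   // blocks until the host shuts down
        }
    }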

5. HttpModules and HttpHandlers are replaced by new “middleware”
Middleware has been designed to replace modules and handlers. It is similar to how Owin and other languages handle this sort of functionality. They are very easy to work with. Check out the ASP.NET docs to learn more. The good (and bad) news is you can’t configure them in a config file either. They are all set in code.
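
As a minimal sketch (a hypothetical example, not from the original post), an inline middleware registered in Startup.Configure plays the role an HttpModule used to:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;

    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            // Inline middleware doing the kind of work an HttpModule used to do.
            app.Use(async (context, next) =>
            {
                context.Response.Headers["X-Handled-By"] = "middleware";   // made-up header
                await next();   // pass control to the next middleware in the pipeline
            });

            // Terminal middleware: the end of the pipeline.
            app.Run(context => context.Response.WriteAsync("Hello from the pipeline end"));
        }
    }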

6. FileStream moved to System.IO.FileSystem ???
Some basic classes that everyone uses on a daily basis have been moved around to different packages. Something as common as FileStream is no longer in the System.IO assembly reference/package. You now have to add the package System.IO.FileSystem. This is confusing because we are using class namespaces that don’t directly match the packages.
This website is very valuable for figuring out where some classes or methods have been moved around to: http://packagesearch.azurewebsites.net/

7. StreamReader constructor no longer works with a file path
Some simple uses of standard libraries have changed. A good example is the StreamReader which was often used by passing in a file path to the constructor. Now you have to pass in a stream. This will cause small refactorings to use a FileStream in addition to the StreamReader everywhere.
Another good example of this is around reflection. GetType() now returns a more simplified object for performance reasons, and you must call GetTypeInfo() to get the full details. Luckily that is backwards compatible with .NET 4.5.
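
A small sketch of both changes, assuming a hypothetical log.txt path:

    using System;
    using System.IO;
    using System.Reflection;

    class Examples
    {
        static void Main()
        {
            // .NET 4.5 allowed new StreamReader("log.txt"); on core you wrap a FileStream.
            using (var stream = new FileStream("log.txt", FileMode.Open, FileAccess.Read))
            using (var reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadToEnd());
            }

            // GetTypeInfo() now carries the details that GetType() used to expose directly.
            TypeInfo info = typeof(string).GetTypeInfo();
            Console.WriteLine(info.IsClass);
        }
    }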

8. Platform specific code… like Microsoft specific RSA
.NET Core is designed to run on Windows, Macs and Linux. But some of your code could potentially compile on Windows but then fail at runtime when you run it on a Mac or Linux. A good example of this is RSACryptoServiceProvider, which appears to be usable. At runtime on a Mac you will get a “platform not supported” type exception. Evidently this RSA provider API is Windows specific. Instead you have to use RSA.Create(), which is a more generic implementation and has slightly different methods. Both are in System.Security.Cryptography. Confusing, huh? The old “If it builds, ship it!” mentality totally falls apart on this one!
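
A minimal sketch of the portable pattern:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class RsaExample
    {
        static void Main()
        {
            // RSACryptoServiceProvider compiles everywhere but throws at runtime
            // on Mac/Linux; RSA.Create() hands back the implementation for the
            // current platform instead.
            using (RSA rsa = RSA.Create())
            {
                byte[] data = Encoding.UTF8.GetBytes("hello");
                byte[] signature = rsa.SignData(data, HashAlgorithmName.SHA256,
                                                RSASignaturePadding.Pkcs1);
                Console.WriteLine(signature.Length);
            }
        }
    }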

9. Newtonsoft changed to default to camel case on field names 🙁
This has to be one of the biggest headaches of the conversion. Newtonsoft now defaults to camelCase. This will cause all sorts of REST APIs to break if you were using PascalCase. We ended up using the JsonProperty attribute on some things to force their casing how we needed them. This one is a big land mine, so watch out for it. #TeamPascalCase
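
For example, a sketch with a hypothetical DTO; the resolver setup here is my assumption of something comparable to the framework's camel-casing default:

    using System;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;

    public class Metric   // hypothetical DTO for illustration
    {
        [JsonProperty("DurationMs")]   // pin the name the API contract expects
        public int DurationMs { get; set; }

        public int RowCount { get; set; }   // no attribute: gets camel-cased
    }

    public class CasingDemo
    {
        public static void Main()
        {
            var settings = new JsonSerializerSettings
            {
                ContractResolver = new DefaultContractResolver
                {
                    NamingStrategy = new CamelCaseNamingStrategy()
                }
            };

            // Prints {"DurationMs":42,"rowCount":7}: the attribute wins,
            // the unattributed property is camel-cased.
            Console.WriteLine(JsonConvert.SerializeObject(
                new Metric { DurationMs = 42, RowCount = 7 }, settings));
        }
    }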

10. Log4net doesn’t work and neither do countless other dependencies, unless you target .NET 4.5!
Log4net is a pretty fundamental library used by countless developers. It has not been ported to core yet. NLog and Serilog do work, so you will have to switch logging providers. Before converting anything to core, you need to review all of your referenced DLL dependencies to ensure they will work with core. But as long as you are targeting Windows, you can target .NET 4.5.1 or newer and use all your current dependencies! If you have to go cross platform… watch out for dependency problems.

11. System.Drawing doesn’t exist
Need to resize images? Sorry.

12. DataSet and DataTable don't exist
People still use these? Actually, some do. We have used DataTables for sending a table of data to a SQL stored procedure as an input parameter. Works like a charm.

13. Visual Studio Tooling
We have seen a lot of weirdness with IntelliSense and Visual Studio in general. Sometimes it highlights code like it is wrong, but it compiles just fine. Being able to switch between your framework targets is awesome for testing your code against each. We just removed net451 as a target framework from the project, but Visual Studio still thinks we are targeting it… There are still a few bugs to be worked out.

14. HttpWebRequest weird changes
In .NET 4.5 there are some properties you have to set on the HttpWebRequest object and you can’t just set them in the headers. Well, in core they decided to reverse course and you have to use the header collection. This requires some hackery and compiler directives… Otherwise you get errors like this from your .NET 4.5 code: The ‘User-Agent’ header must be modified using the appropriate property or method. We need some extension methods for core to make it backwards compatible.
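
A sketch of that hackery; the #if symbols are assumptions that depend on the target monikers in your project.json:

    using System.Net;

    public class RequestDemo
    {
        public static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com");

    #if NETCOREAPP1_0
            // Core: the header collection is the only way in.
            request.Headers["User-Agent"] = "my-agent";
    #else
            // .NET 4.5: use the dedicated property, or the runtime throws the
            // "must be modified using the appropriate property or method" error.
            request.UserAgent = "my-agent";
    #endif
        }
    }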

15. Creating a Windows Service in .NET Core
It is possible to make your app run as a Windows Service but it is done differently. In .NET 4.5 you would make a class that inherits from ServiceBase and also create an Installer class to assist in installing. In core they just expect you to use the command line to install your service.
This Stack Overflow question does a good job covering the topic: http://stackoverflow.com/questions/37346383/hosting-asp-net-core-as-windows-service

BONUS – Database access
Low level access via SqlConnection and SqlCommand works the same. My favorite two ORMs, Dapper and Entity Framework, both work with core. Entity Framework Core has quite a few differences. Learn more here: https://docs.efproject.net/en/latest/efcore-vs-ef6/features.html
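
For instance, a Dapper query reads the same on core as on the full framework (the connection string and table here are made-up placeholders):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using Dapper;

    class DbExample
    {
        static void Main()
        {
            // Placeholder connection string, not a real environment.
            const string connectionString =
                "Server=localhost;Database=AppDb;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                // Dapper's extension methods work unchanged on core.
                IEnumerable<string> names = connection.Query<string>(
                    "SELECT Name FROM Users WHERE Active = @active",
                    new { active = true });

                foreach (var name in names) Console.WriteLine(name);
            }
        }
    }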

BONUS – Need help?
If you are working with .NET Core and have questions, try joining the community Slack (http://aspnetcore.slack.com); you can sign up at http://tattoocoder.com/aspnet-slack-sign-up/

Testing software - Find the next available port

Index

  • Intro
  • Problem
  • Solution


Intro

Nowadays, tests are mandatory to ensure the quality of your code. For this reason, developers typically set up CI environments where they can run any sort of tests: unit, functional, integration...

Problem

This is a very small post about something I found very interesting and want to preserve for the future. I ended up with some functional tests on my CI boxes that were failing all the time. After some investigation I found the reason: an unexpected error on the build machine was preventing the release of the port I use for testing.

I think this is a scenario you may find yourself in, especially with functional tests where you need to exercise certain behaviour with frameworks like PhantomJS and the like.

Solution

Here is a small code snippet to find the next available port:
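
A minimal C# sketch of the trick: bind to port 0 so the OS picks a free port, read it back, and release it so the test server can claim it.

    using System.Net;
    using System.Net.Sockets;

    public static class PortFinder
    {
        public static int GetNextAvailablePort()
        {
            // Binding to port 0 asks the OS for any free port.
            var listener = new TcpListener(IPAddress.Loopback, 0);
            listener.Start();
            int port = ((IPEndPoint)listener.LocalEndpoint).Port;
            listener.Stop();   // release it so the test server can bind to it
            return port;
        }
    }

There is an inherent race between releasing the port and the test server binding it, but on a CI box that window is usually acceptable.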