• The problem
  • One solution (of many)
  • Code snippet
  • References

The problem

Network connectivity issues and flaky dependencies (APIs, databases, cloud services...) can become painful when you work in a production environment. Let's make your life a bit easier with Polly.

One solution (of many)

Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner. Polly targets .NET 4.0, .NET 4.5 and .NET Standard 1.1.

The following code snippet allows you to define what your retry will look like, based on: the number of retries (the wait grows exponentially: 2, 4, 8, 16 seconds...), the retry policy (the expected result), the action to be retried, and the exception type on which you want to retry.

Polly provides asynchronous methods so your retries will never block your threads.

Disclaimer: I went for an exponential retry, but you can also define a fixed delay between retries.

Code snippet
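A minimal sketch of such a policy, using Polly's Handle/OrResult/WaitAndRetryAsync fluent API (the retry count and handled types here are illustrative, not the original snippet):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class RetryExample
{
    public static Task<HttpResponseMessage> GetWithRetriesAsync(
        HttpClient client, string url)
    {
        // Retry on a thrown HttpRequestException OR on a non-success
        // status code, waiting 2, 4, 8 seconds between attempts
        // (exponential back-off).
        var policy = Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
            .WaitAndRetryAsync(
                retryCount: 3,
                sleepDurationProvider: attempt =>
                    TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        // ExecuteAsync awaits the action, so no thread is blocked
        // while waiting between retries.
        return policy.ExecuteAsync(() => client.GetAsync(url));
    }
}
```

For a fixed delay instead of back-off, a sleep duration provider like `attempt => TimeSpan.FromSeconds(5)` works equally well.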




Sometimes, when adding new functionality to your programs, you might end up with errors that are difficult to track down. This is the case when using DI and you've forgotten to register one of your dependencies.


To avoid errors like the one described in the intro, I have created a unit test that checks that all services registered with the container are resolvable. You might find this helpful if you want to verify that all the components behind all your endpoints are properly registered.

This particular case uses Castle Windsor as the DI engine.
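A sketch of such a test, using Castle Windsor's kernel API to enumerate every registered service (the `Bootstrapper.CreateContainer` name is illustrative; it stands in for however your application builds its container):

```csharp
using System.Linq;
using Castle.Windsor;
using NUnit.Framework;

[TestFixture]
public class ContainerRegistrationTests
{
    [Test]
    public void All_registered_services_can_be_resolved()
    {
        // Build the container exactly as the application does
        // ("Bootstrapper.CreateContainer" is a hypothetical helper).
        IWindsorContainer container = Bootstrapper.CreateContainer();

        // Every handler the kernel knows about exposes the service types
        // it was registered for; resolving each one surfaces a missing
        // dependency as a test failure instead of a runtime error.
        var services = container.Kernel
            .GetAssignableHandlers(typeof(object))
            .SelectMany(h => h.ComponentModel.Services);

        foreach (var service in services)
        {
            Assert.DoesNotThrow(() => container.Resolve(service));
        }
    }
}
```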

Here are some interesting resources about Docker.

Here are some links from Jorge (Sysdig), who gave a great talk on monitoring containers:
How to Monitor Docker Swarm:
Monitoring Kubernetes series:

Slides from the talk:

As promised, Liz (Aqua) has also shared the slides from her great talk on Secrets Management with Docker, Docker Swarm and Aqua. Slides are online at

You may also be interested in a recent blog post (http://blog.aquasec.com/managing-secrets-in-docker-containers) that covers "Managing Secrets in Docker Containers". It also includes a short demo.

The Birthday training materials from Docker are available at

Docker London wouldn't be possible without all of our great sponsors:
  • Docker and HPE (https://www.hpe.com/us/en/home.html) have partnered to help businesses transform and modernize their datacenters with HPE Docker-ready servers - a hybrid infrastructure solution that is configured for Docker containers, fully supported by HPE and backed by Docker.
  • Katacoda (https://www.katacoda.com/) - Learn Docker, Kubernetes and Cloud Native applications using free interactive tutorials directly in your browser!
  • Hedvig (http://www.hedviginc.com/) provides software-defined storage for enterprises building private, hybrid, or multi-cloud environments. Hedvig is the only storage solution designed for both primary and secondary data which scales in a single cluster across multiple locations, making it ideal for legacy and modern workloads.
  • Rancher (http://rancher.com/) is a complete, open source platform for deploying and managing containers in production. It includes commercially-supported distributions of Kubernetes, Mesos, and Docker Swarm, making it easy to run containerized applications on any infrastructure.
  • AVI Networks (https://www.avinetworks.com/), 30 Seconds to Application Services in Any Data Center or Cloud.
  • Contino (http://www.contino.co.uk/), DevOps & Continuous Delivery Consultancy.
  • Docker (http://www.docker.com/), Build, Ship, and Run Any App, Anywhere.

I've been attending an awesome two-day Agile training provided by some workmates at Just Eat. It's been pretty good compared with other training I've attended in the past. We've covered a lot of topics, e.g.:

  • Origins
  • Why we need agile methodologies/frameworks
  • Waterfall vs agile
  • Burn-down charts
  • User stories (assumptions, acceptance criteria)
  • Cross-functional requirements
  • Scrum, XP, Crystal Clear and Lean
  • Ceremonies
  • Story mapping / sprint planning (goal, activities and stories)
  • Project: design and create a new animal that survives the extinction of all the other animals

Here are some pictures; more posts explaining everything from the training will follow shortly.

Happy weekend to everyone!


This week I attended the NDC London workshops, and one of the most interesting was Running Docker and Containers in Development and Production by Ben Hall. He is the founder of Ocelot Uproar and the creator of one of the best resources about Docker I've ever seen: Katacoda.com. Please take a look at his site if you are interested in this technology, because you'll gain great insight by following the lessons Ben has created.

Keep learning!




Today I want to share a way I found very useful to generate CIDRs, using the IPNetwork library from GitHub.

I like the approach I took here, as the idea is to start with the biggest subnets with the most IPs in use and then work down to the smallest subnets with the fewest. I group by the first two octets of the IPs provided, basically because providers always own a range of these IPs.
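A sketch of that grouping step, assuming the IPNetwork library's `Parse` and `Supernet` methods (the exact signatures are assumptions about the lduchosal/ipnetwork API; the input is a plain list of dotted-quad strings):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net; // the IPNetwork library lives in this namespace

public static class CidrGrouping
{
    public static List<IPNetwork> GenerateCidrs(IEnumerable<string> ips)
    {
        var result = new List<IPNetwork>();

        // Group the IPs by their first two octets, since a provider
        // typically owns whole ranges under the same leading octets.
        var groups = ips.GroupBy(ip => string.Join(".", ip.Split('.').Take(2)));

        foreach (var group in groups)
        {
            // One /32 per IP, then collapse adjacent networks into
            // wider CIDRs with Supernet (API assumed from the library).
            var networks = group
                .Select(ip => IPNetwork.Parse(ip + "/32"))
                .ToArray();
            result.AddRange(IPNetwork.Supernet(networks));
        }
        return result;
    }
}
```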


In the following algorithm, I use a completely different approach, which I've tested with different IP populations. In one of the tests, I passed the function 55 scattered IPs (remember my limit is 50 CIDRs), which with the original code above ended up returning very wide CIDRs (/8, /16). With the new algorithm I only get /32s, which is optimal. I know 5 are wasted, but this scenario is not realistic under an Internet provider subnet environment.

In a second test, I passed in real production data again. The result is very good, as I get rid of all the "/7"s and the nine "/8"s (161 million IPs), and now the worst CIDR I get is a single "/16" (65k IPs). This is a huge improvement considering this is real data and I'm basically getting the same coverage.

The rest are real ranges but still small (/21, /24, /28), plus orphaned IPs (/32). There's still room for improvement, as we don't even reach the AWS limit of 50: we have 28 free slots.


Here is some code:
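One way to get the behaviour described above - a /32 only collapses into a wider CIDR when both sibling halves of that CIDR are present, so scattered IPs stay as /32s and no unused address is ever pulled in - is to treat IPv4 addresses as 32-bit integers and merge bottom-up. This is a sketch under that assumption, not the original code:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ExactCidrMerge
{
    // A CIDR is (network address, prefix length), address as a uint.
    public static List<(uint Net, int Prefix)> Merge(IEnumerable<uint> ips)
    {
        // Start from one /32 per IP.
        var cidrs = new HashSet<(uint Net, int Prefix)>(
            ips.Select(ip => (ip, 32)));

        // Repeatedly merge sibling pairs: two CIDRs of prefix p that
        // differ only in bit (32 - p) form one exact CIDR of prefix p-1,
        // so merging never includes an address we weren't given.
        bool merged = true;
        while (merged)
        {
            merged = false;
            foreach (var c in cidrs.ToList())
            {
                if (c.Prefix == 0 || !cidrs.Contains(c)) continue;

                var sibling = (c.Net ^ (uint)(1UL << (32 - c.Prefix)), c.Prefix);
                if (cidrs.Contains(sibling))
                {
                    cidrs.Remove(c);
                    cidrs.Remove(sibling);
                    // Parent network: clear the low (33 - p) bits.
                    var parentNet = c.Net & (uint)~((1UL << (33 - c.Prefix)) - 1);
                    cidrs.Add((parentNet, c.Prefix - 1));
                    merged = true;
                }
            }
        }
        return cidrs.OrderBy(c => c.Net).ToList();
    }
}
```

With this, 55 scattered IPs simply come back as 55 /32s, while four consecutive addresses aligned on a /30 boundary collapse into a single /30. If the result still exceeds the 50-CIDR limit, you'd then widen only the cheapest remaining pairs.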


By moving from the default approach of using the IPNetwork.Supernet() method to this one, the generation improved enormously: I went from 184M available IPs in my CIDR domain to just 40k. That's a HUGE improvement, right?


I definitely recommend this algorithm for generating CIDRs based on IPs assigned by a particular provider, as these providers typically stay within a certain range.