WHAT'S NEW?

CIDR generation algorithm (when IPs are under a provider's domain)

Intro

Today I want to share with you a way I found very useful to generate CIDRs using the IPNetwork library from GitHub.

I like the approach I took here: the idea is to start with the biggest subnets holding the most IPs in use and then work down to the smallest subnets with fewer IPs. I group the provided IPs by their first two octets, basically because providers always own a range of these addresses.

Tests

In the following algorithm I use a completely different approach, which I've tested with different IP populations. In one of the tests, I passed 55 unrelated IPs into the function (remember my limit is 50 CIDRs); the original code ends up returning very wide CIDRs (/8, /16), while the new algorithm only returns /32s, which is a 100% optimisation. I know 5 of them are wasted, but that scenario is not realistic within an Internet provider's subnet environment.

In a second test, I passed in real production data again. The result is very good: I get rid of the "/7" and the nine "/8" ranges (161 million IPs), and the worst CIDR is now a single "/16" (65k IPs). This is a huge improvement considering this is real data and I'm covering basically the same set of addresses.

The rest are real but small ranges (/21, /24, /28) plus orphaned IPs (/32). There is still room for improvement, since we sit 28 free slots below the AWS limit of 50.

Solution

Here is some code:
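The original snippet is not reproduced here, so below is a minimal C# sketch of the idea described above: group the addresses by their first two octets, then walk from wide prefixes down to /32, emitting a CIDR whenever a subnet is densely used. The density threshold and the plain bit arithmetic are my own assumptions; the original code used the IPNetwork library for the subnet maths.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;

public static class CidrGenerator
{
    // Groups the IPs by their first two octets (the provider's /16 ranges) and, inside each
    // group, walks from wide prefixes down to /32, emitting a CIDR whenever the subnet is
    // densely used by the remaining addresses. Orphaned addresses end up as /32 entries.
    public static List<string> Generate(IEnumerable<IPAddress> ips, double minDensity = 0.5)
    {
        var cidrs = new List<string>();
        var all = new HashSet<uint>(ips.Select(ToUInt32));

        foreach (var group in all.GroupBy(ip => ip >> 16).ToList())
        {
            var pending = new HashSet<uint>(group);
            for (int prefix = 16; prefix <= 32 && pending.Count > 0; prefix++)
            {
                uint mask = prefix == 32 ? uint.MaxValue : ~(uint.MaxValue >> prefix);
                long subnetSize = 1L << (32 - prefix);

                foreach (var subnet in pending.GroupBy(ip => ip & mask).ToList())
                {
                    // Emit wide subnets only when they are densely used; always emit /32s.
                    if (prefix == 32 || subnet.Count() >= subnetSize * minDensity)
                    {
                        cidrs.Add($"{ToIpAddress(subnet.Key)}/{prefix}");
                        pending.ExceptWith(subnet);
                    }
                }
            }
        }
        return cidrs;
    }

    private static uint ToUInt32(IPAddress ip)
    {
        byte[] b = ip.GetAddressBytes(); // IPv4, network byte order
        return ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];
    }

    private static IPAddress ToIpAddress(uint value) =>
        new IPAddress(new[] { (byte)(value >> 24), (byte)(value >> 16), (byte)(value >> 8), (byte)value });
}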

Results


By moving from the default approach of using the IPNetwork.Supernet() method to this one, the generation improved enormously: I went from 184M available IPs in my CIDR domain to just 40k. That's a HUGE improvement, right?

Conclusion

I definitely recommend this algorithm for generating CIDRs based on IPs assigned by a particular provider, as these providers typically operate within a certain range.

Amazon Web Services policies to modify security groups

Intro

In order for a feature in AWS to edit another feature's security groups, we need to define a new policy in the IAM console.

There are three different "Actions" we want to allow our feature to use:
  • AuthorizeSecurityGroupIngress
  • RevokeSecurityGroupIngress
  • DescribeSecurityGroups
For the first two Actions we can define a "Resource" restricting where the policy applies. For DescribeSecurityGroups there is no resource-level limitation. This way we limit the activity of our feature, increasing security.

Code

First create a new policy with the name: CanDo_SecurityGroup

Within the policy paste the following piece of code:
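The policy document itself is not shown here, so below is a hedged reconstruction of what it could look like, based on the three Actions and the tag condition described in this post. The resource ARN and the ec2:ResourceTag key name ("Name") are my assumptions; adjust them to your account and tag key.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ModifyTaggedSecurityGroups",
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": "arn:aws:ec2:*:*:security-group/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/Name": "yourSecurityGroupTagValue"
        }
      }
    },
    {
      "Sid": "DescribeAllSecurityGroups",
      "Effect": "Allow",
      "Action": "ec2:DescribeSecurityGroups",
      "Resource": "*"
    }
  ]
}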

As you can see in the snippet above, with the "Condition" block we can define our own restrictions. In this case our feature will only be able to add or revoke ingress rules on security groups carrying a particular tag value ("yourSecurityGroupTagValue").

Let's crack some stuff


Index


  • Intro
  • Video
  • Specifications
  • References

Intro


Today I bring you a video about a very special gadget: the WiFi Pineapple, built in San Francisco (California) by the guys from Hak5. A very elegant machine for running your own penetration tests.

Basically, what the Pineapple does is act as a "man-in-the-middle" between WiFi users and endpoints. It can automatically disconnect users and make them connect to the Internet through you.



Specifications

There are two main Pineapple devices: Nano ($99) and Tetra ($199). Here are some specifications for both:
WiFi Pineapple NANO

The ultimate WiFi pentest companion, in your pocket.
  • 6th generation WiFi Pineapple software featuring PineAP, web interface and modules
  • Dual discrete 2.4 GHz b/g/n Atheros radios
  • Up to 400 mW per radio with included antennas
  • Integrated Power over USB Ethernet Plug
  • Memory expansion via Micro SD (up to 128 GB)
  • Optional mobile EDC Tactical case and battery
  • USB 2.0 Host accessory expansion port

WiFi Pineapple TETRA

The amplified, dual-band (2.4/5 GHz) powerhouse.
  • 6th generation WiFi Pineapple software featuring PineAP, web interface and modules
  • Dual discrete 2.4/5 GHz a/b/g/n Atheros 2:2 MIMO radios
  • 4 onboard Skybridge amplifiers
  • Up to 800 mW per radio with included antennas
  • Integrated Power over USB Ethernet Port
  • Integrated Power over USB Serial Port
  • Onboard NAND Flash (2 GB)
  • USB 2.0 Host and RJ45 Ethernet Ports

References


https://www.wifipineapple.com/
https://www.hak5.org

JSLondon meetup

Intro


Today I attended a JavaScript meetup organised by the guys from the "London JavaScript Meetup" team, which I found really interesting and would like to share with you :)

It was quite cool, as they were talking about cutting-edge technologies like virtual reality and cross-platform mobile technologies like Ionic (which released its first v2 RC a few days ago).

I encourage all of you to attend these kinds of events and join other developers to talk about the cool stuff that is happening at the moment.

Follow this link:


Close for holidays

Yes!!!! It's that time of the year when I disconnect from everything in this world except beer and sun 🌞

I wish you the best, boys and girls, and see you on the other side 😃 😃 😃




Amazon Aurora goes Multi-AZ

Index

  • TL;DR
  • Objective
  • Benefits
  • Problem
  • Solution

TL;DR

To put a DB cluster in front of your Multi-AZ DB instances, I found the best way is to use a Route 53 DNS record that points to the cluster and have my component hit that DNS name.

Objective

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

Benefits

Here are some benefits of using Multi-AZ databases, from both the database and the component perspective:
  • Enhanced durability: if a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment where in case of failure, a user restore operation will be required (can take several hours to complete and any data updates that occurred after the latest restorable time will not be available).
  • Increased availability: if an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines. The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
  • No administrative intervention: DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
  • Simplify your component: because you hit a DNS record, there's no need to use the AWS SDK to discover DB clusters/instances anymore. You end up with cleaner code and no dependency on the Amazon API.

Problem

We start from an app running against a single DB and move to a single cluster pointing to several DB instances in different Availability Zones.

The main problem we had when setting up the Aurora DB cluster was the lack of support for clusters from both the CloudFormation and the application perspective:
  • From the app:
    • On one hand, after spinning up the cluster, it was difficult for us to find the endpoint using the AWS SDK. Apparently AWS trims the cluster identifier down to 64 characters or fewer, so you don't know what it will look like.
    • On the other hand, the AWS SDK client does not support filtering and returns paginated results, which makes things more complicated.
  • From cloudformation:
    • Cluster identifiers cannot be defined from the CF templates.

Solution

After the discovery process we came up with a very straightforward solution, which is basically to set up a DNS record in front of the DB cluster. This way our app points at the DNS name and there is no need to worry about finding the cluster through the SDK.

Please see below a couple of code snippets to enable Multi-AZ in your environment (a combined sketch follows this list):
  • To create the DB replica and point the cluster to your main instance
  • To create the DNS record pointing to the cluster
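The snippets themselves are missing here, so below is a combined, hedged CloudFormation-style sketch of both ideas. Resource names, the instance class and the hosted zone are placeholders I made up, not values from the original templates.

Resources:
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora
      MasterUsername: admin
      MasterUserPassword: ChangeMe12345

  AuroraPrimary:
    # Main (writer) instance inside the cluster
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora
      DBClusterIdentifier: !Ref AuroraCluster
      DBInstanceClass: db.r3.large

  AuroraReplica:
    # Replica instance; Aurora typically places it in another AZ for Multi-AZ failover
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora
      DBClusterIdentifier: !Ref AuroraCluster
      DBInstanceClass: db.r3.large

  DbDnsRecord:
    # Route 53 record in front of the cluster endpoint so the app never needs the SDK
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: myzone.internal.
      Name: orders-db.myzone.internal.
      Type: CNAME
      TTL: '60'
      ResourceRecords:
        - !GetAtt AuroraCluster.Endpoint.Address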




Some lessons I've learnt integrating .Net Core

Index

  • Intro
  • Lessons

Intro

Every now and then there's a new trending thing in the market: Angular 2, webpack, React... Now it's time for me to tell you about .NET Core. It's been a while since it was announced at the Microsoft Build event, and the project has gone through a chain of important changes.

In this post I want to highlight some of the lessons I've learnt while upgrading some of the components I typically work with to this version of the framework.

Lessons

1. Using xproj & csproj files together

There doesn’t seem to be any way for these two project types to reference each other. You move everything to xproj, but then you can no longer use MSBuild. If you are like us, that means your current setup with your build server won’t work. It is possible to use xproj and csproj files both at the same time which is ultimately what we ended up doing for our Windows targeted builds of Prefix. Check out our other blog post on this topic: http://stackify.com/using-both-xproj-and-csproj-with-net-core/

2. Building for deployment
If you are planning to build an app that targets non-Windows platforms, you have to build it on the target platform. In other words, you can't build your app on Windows and then deploy it to a Mac. You can do that with a netstandard library, but not a netcoreapp. They are hoping to remove this limitation in the future.

3. NetStandard vs NetCoreApp1.0
What is the difference? NetStandard is designed as a common standard so that .NET 4.5, Core, UWP, Xamarin and everything else has a standard to target. So, if you are making a shared library that will be a nuget package, it should be based on NetStandard. Learn more about NetStandard here: https://github.com/dotnet/corefx/blob/master/Documentation/architecture/net-platform-standard.md
If you are making an actual application, you are supposed to target NetCoreApp1.0 as the framework IF you plan on deploying it to Macs or Linux. If you are targeting Windows, you can also just target .NET 4.5.1 or later.

4. IIS is dead, well sort of
As part of .NET Core, Microsoft (and the community) has created a whole new web server called Kestrel. The goal behind it has been to make it as lean, mean, and fast as possible. IIS is awesome but comes with a very dated pipeline model and carries a lot of bloat and weight with it. In some benchmarks, I have seen Kestrel handle up to 20x more requests per second. Yowzers!
Kestrel is essentially part of .NET Core, which makes deploying your web app as easy as deploying any console app. As a matter of fact, every app in .NET Core is essentially a console app. When your ASP.NET Core app starts up, it activates the Kestrel web server, sets up the HTTP bindings, and handles everything. This is similar to how self-hosted Web API projects worked with Owin.
IIS isn't actually dead. You can use IIS as a reverse proxy sitting in front of Kestrel to take advantage of some of its features that Kestrel does not have. Things like virtual hosts, logging, security, etc.
If you have ever made a self hosted web app in a Windows service or console app, it all works much differently now. You simply use Kestrel. All the self hosted packages for WebApi, SignalR and others are no longer needed. Every web app is basically self hosted now.

5. HttpModules and HttpHandlers are replaced by new “middleware”
Middleware has been designed to replace modules and handlers. It is similar to how Owin and other languages handle this sort of functionality. They are very easy to work with. Check out the ASP.NET docs to learn more. The good (and bad) news is you can’t configure them in a config file either. They are all set in code.
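As a quick illustration (my own snippet, not from the original article), a minimal piece of inline middleware registered in Startup.Configure looks like this:

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Runs for every request, before and after the rest of the pipeline.
        app.Use(async (context, next) =>
        {
            Console.WriteLine($"Handling {context.Request.Path}");
            await next();
            Console.WriteLine($"Finished {context.Request.Path} with status {context.Response.StatusCode}");
        });

        // Terminal middleware at the end of the pipeline.
        app.Run(async context => await context.Response.WriteAsync("Hello from the last middleware"));
    }
}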

6. FileStream moved to System.IO.FileSystem ???
Some basic classes that everyone uses on a daily basis have been moved around to different packages. Something as common as FileStream is no longer in the System.IO assembly reference/package. You now have to add the package System.IO.FileSystem. This is confusing because we are using class namespaces that don’t directly match the packages.
This website is very valuable for figuring out where some classes or methods have been moved around to: http://packagesearch.azurewebsites.net/

7. StreamReader constructor no longer works with a file path
Some simple uses of standard libraries have changed. A good example is the StreamReader which was often used by passing in a file path to the constructor. Now you have to pass in a stream. This will cause small refactorings to use a FileStream in addition to the StreamReader everywhere.
Another good example of this is around reflection. GetType() now returns a more simplified object for performance reasons and you must call GetTypeInfo() to get the full details. Luckily, that is backwards compatible with .NET 4.5.
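For illustration (my own snippet, not from the original article), the typical changes look like this:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public static class CoreChanges
{
    public static void ReadFirstLine(string path)
    {
        // Before: new StreamReader(path) accepted a file path directly.
        // In .NET Core you wrap a FileStream explicitly.
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            Console.WriteLine(reader.ReadLine());
        }
    }

    public static void InspectType()
    {
        // Reflection: GetTypeInfo() now exposes the members that used to hang off Type.
        var properties = typeof(string).GetTypeInfo().DeclaredProperties;
        Console.WriteLine(string.Join(", ", properties.Select(p => p.Name)));
    }
}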

8. Platform specific code… like Microsoft specific RSA
.NET Core is designed to run on Windows, Macs and Linux. But some of your code could potentially compile on Windows but then fail at runtime when you run it on a Mac or Linux. A good example of this is RSACryptoServiceProvider which appears to be useable. At runtime on a Mac you will get a “platform not supported” type exception. Evidently this RSA provider API is Windows specific. Instead you have to use RSA.Create() which is a more generic implementation and has slightly different methods. Both are in System.Security.Cryptography. Confusing huh? The old “If it builds, ship it!” mentality totally falls apart on this one!
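A minimal cross-platform sketch (mine, not from the article), using the generic factory mentioned above:

using System.Security.Cryptography;
using System.Text;

public static class CrossPlatformRsa
{
    public static byte[] Sign(string payload)
    {
        // RSA.Create() returns a platform-appropriate implementation, unlike
        // RSACryptoServiceProvider, which fails at runtime on Mac/Linux.
        using (var rsa = RSA.Create())
        {
            byte[] data = Encoding.UTF8.GetBytes(payload);
            return rsa.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }
    }
}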

9. Newtonsoft changed to default to camel case on field names 🙁
This has to be one of the biggest headaches of the conversion. Newtonsoft now defaults to camelCase. This will cause all sorts of REST APIs to break if you were using PascalCase. We ended up using the JsonProperty attribute on some things to force their casing how we needed them. This one is a big land mine, so watch out for it. #TeamPascalCase
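For example (my own sketch), you can pin the wire name per property with JsonProperty, or keep PascalCase globally with a contract resolver:

using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class OrderDto
{
    // Forces this exact name on the wire regardless of the serializer's default casing.
    [JsonProperty("OrderId")]
    public int OrderId { get; set; }
}

public static class JsonDefaults
{
    // Settings that keep property names exactly as declared (PascalCase).
    public static readonly JsonSerializerSettings PascalCase = new JsonSerializerSettings
    {
        ContractResolver = new DefaultContractResolver()
    };
}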

10. Log4net doesn’t work and neither do countless other dependencies, unless you target .NET 4.5!
Log4net is a pretty fundamental library used by countless developers. It has not been ported to core, yet. NLog and Serilog work and you will have to switch logging providers. Before converting anything to core, you need to review all of your referenced dll dependencies to ensure they will work with core. But as long as you are targeting Windows, you can target .NET 4.5.1 or newer and use all your current dependencies! If you have to go cross platform… watch out for dependency problems.

11. System.Drawing doesn’t exist
Need to resize images? Sorry.

12. DataSet and DataTable don't exist
People still use these? Actually, some do. We have used DataTables for sending a table of data to a SQL stored procedure as an input parameter. Works like a charm.

13. Visual Studio Tooling
We have seen a lot of weirdness with IntelliSense and Visual Studio in general. Sometimes it highlights code as if it were wrong, but it compiles just fine. Being able to switch between your framework targets is awesome for testing your code against each one. We just removed net451 as a target framework from the project, but Visual Studio still thinks we are targeting it... There are still a few bugs to be worked out.

14. HttpWebRequest weird changes
In .NET 4.5 there are some properties you have to set on the HttpWebRequest object and you can’t just set them in the headers. Well, in core they decided to reverse course and you have to use the header collection. This requires some hackery and compiler directives… Otherwise you get errors like this from your .NET 4.5 code: The ‘User-Agent’ header must be modified using the appropriate property or method. We need some extension methods for core to make it backwards compatible.
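A hedged sketch of that kind of workaround (the NET451 compilation symbol is the usual default for a net451 target and may differ in your project):

using System.Net;

public static class RequestFactory
{
    public static HttpWebRequest Create(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
#if NET451
        // Classic .NET insists on the dedicated property...
        request.UserAgent = "my-agent";
#else
        // ...while .NET Core wants the header collection instead.
        request.Headers["User-Agent"] = "my-agent";
#endif
        return request;
    }
}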

15. Creating a Windows Service in .NET Core
It is possible to make your app run as a Windows Service but it is done differently. In .NET 4.5 you would make a class that inherits from ServiceBase and also create an Installer class to assist in installing. In core they just expect you to use the command line to install your service.
This Stack Overflow does a good job covering the topic: http://stackoverflow.com/questions/37346383/hosting-asp-net-core-as-windows-service

BONUS – Database access
Low level access via SqlConnection and SqlCommand works the same. My favorite two ORMs, Dapper and Entity Framework, both work with core. Entity Framework Core has quite a few differences. Learn more here: https://docs.efproject.net/en/latest/efcore-vs-ef6/features.html

BONUS – Need help?
If you are working with .NET Core and have questions, try joining the community Slack (http://aspnetcore.slack.com); you can sign up at http://tattoocoder.com/aspnet-slack-sign-up/

Testing software - Find the next available port

Index

  • Intro
  • Problem
  • Solution


Intro

Nowadays, tests are a mandatory thing to assure the quality of your code. For this reason, developers typically set up CI environments where they can run any sort of tests: unit, functional, integration...

Problem

This is a very small post about something I found very interesting and want to preserve for the future. I ended up with some functional tests on my CI boxes that were failing all the time. After some investigation I found the reason: an unexpected error on the build machine was preventing the release of the port I use for testing.

I think this is a scenario you can easily find yourself in, especially with functional tests where you need to verify certain behaviour with frameworks like PhantomJS and the like...

Snippet

Please see here below a small code snippet to find the next available port:
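The original snippet is not reproduced here; below is a small C# sketch of one way to do it (class and method names are mine):

using System.Collections.Generic;
using System.Linq;
using System.Net.NetworkInformation;

public static class PortFinder
{
    // Returns the first TCP port >= startingPort that no local listener is currently using.
    public static int GetNextAvailablePort(int startingPort)
    {
        var inUse = new HashSet<int>(
            IPGlobalProperties.GetIPGlobalProperties()
                .GetActiveTcpListeners()
                .Select(endpoint => endpoint.Port));

        return Enumerable.Range(startingPort, ushort.MaxValue - startingPort + 1)
            .First(port => !inUse.Contains(port));
    }
}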



Asp .Net JavaScript Services by Steve Sanderson

Intro


Today I don't have the energy to write an entire post so I think I'm going to cheat :)

But no worries, I've brought really cool stuff this time, offered by our friends from the ASP.NET Community Standup group. As you may already be aware, these guys are basically the heads of Microsoft's framework development group. In their last standup they talked about a new beta tool which seems to be the next "new thing".

Today our guest is Steve Sanderson, who is working on "ASP .Net JavaScript Services", currently in alpha/beta phase. What Steve has been building is a new tool to make your life easier if you are working with the Angular/Knockout/React frameworks. Some of the features demoed today are:


  1. What we can do for SPA
  2. Examples
    1. Angular 2
      1. Generator: this is a Yeoman generator for .Net Core
      2. Webpack dev middleware - HMR
      3. Prod builds (Caution! this is cool stuff)
      4. Prerendering (This too!)
    2. React
      1. State-preserving HMR - direct editing
      2. Maybe Redux debugging
    3. Other features
      1. Cache priming, validation, lazy-loading


JustSaying - talking with queues

Index

  • Intro
  • Some prep
  • References
  • Messages
  • Subscriber
  • Publisher
  • Configuration
  • Others
  • Final gift

Intro

JustSaying is a light-weight service bus on top of AWS (available as a NuGet package) created by the guys from JustEat. It is used to communicate with AWS queues: you can subscribe to a queue or publish into SNS (push notifications) in just a few lines of code.
Recently v4 has been released with cool new features like asynchronous message handlers and much more.

Some prep

To start working with AWS I recommend installing the AWS developer tools and configuring them with your credentials. These are defined in AWS as an access key and a secret key. You can place these credentials in your app/web.config file, in system environment variables, in your console, in a credentials file in a particular folder, or even in your machine.config file.


Messages

These entities represent the unit of work (POCO) for JustSaying. You can define your own entities by inheriting from the Message class, as displayed here:
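A minimal example (type and property names are mine; the Message base class comes from the JustSaying package):

using JustSaying.Models;

// The class name is what JustSaying uses to derive the topic/queue name for this message.
public class OrderAccepted : Message
{
    public int OrderId { get; set; }
}

public class OrderFailed : Message
{
    public int OrderId { get; set; }
    public string Reason { get; set; }
}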

Subscriber

Now let's subscribe to a queue. This step and the publisher registration can be done in the same place in your application.
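The registration snippet is not shown here; the sketch below follows the shape of the JustSaying README of that era, and the exact fluent method and handler interface names may differ between versions:

using System;
using System.Threading.Tasks;
using Amazon;
using JustSaying;
using JustSaying.Messaging.MessageHandling;

public class OrderAcceptedHandler : IHandlerAsync<OrderAccepted>
{
    // Return true when the message has been processed successfully; returning false
    // (or throwing) sends it back for retry and eventually to the error queue.
    public Task<bool> Handle(OrderAccepted message)
    {
        Console.WriteLine($"Order {message.OrderId} accepted");
        return Task.FromResult(true);
    }
}

public static class SubscriberSetup
{
    public static void Start()
    {
        CreateMeABus.InRegion(RegionEndpoint.EUWest1.SystemName)
            .WithSqsTopicSubscriber()
            .IntoQueue("customer-orders")
            .WithMessageHandler(new OrderAcceptedHandler())
            .StartListening();
    }
}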



Publisher

Now it's time to tell JustSaying which messages we want to publish. The topic is derived from the message name:
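Again a hedged sketch following the README of that era (method names may vary by version); the topic name comes from the OrderAccepted type:

using Amazon;
using JustSaying;

public static class PublisherSetup
{
    public static void PublishExample()
    {
        var publisher = CreateMeABus.InRegion(RegionEndpoint.EUWest1.SystemName)
            .WithSnsMessagePublisher<OrderAccepted>();

        // Publishes to the SNS topic derived from the message type name.
        publisher.Publish(new OrderAccepted { OrderId = 1234 });
    }
}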

Configuration


At this step we can define how our handlers will manage the messages from the queues. In this example we keep messages for the default lifetime, 1 minute. After that time, if no handler takes care of a message, it is thrown away (not really: it is kept in an 'error' queue). We tell 'OrderFailed' to keep messages for 5 minutes and not to retry them on failure for 60 seconds.
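A hedged sketch of that configuration. The property names on the subscription configuration object are recalled from the v4 API and may well differ in your version:

using Amazon;
using JustSaying;

public static class FailedOrderSubscription
{
    public static void Start()
    {
        CreateMeABus.InRegion(RegionEndpoint.EUWest1.SystemName)
            .WithSqsTopicSubscriber()
            .IntoQueue("order-failures")
            .ConfigureSubscriptionWith(cfg =>
            {
                // Keep OrderFailed messages for 5 minutes instead of the default 1 minute,
                // and do not retry a failed message for 60 seconds.
                cfg.MessageRetentionSeconds = 300;
                cfg.VisibilityTimeoutSeconds = 60;
            })
            .WithMessageHandler(new OrderFailedHandler())
            .StartListening();
    }
}

// Handler analogous to the OrderAcceptedHandler shown above.
public class OrderFailedHandler : JustSaying.Messaging.MessageHandling.IHandlerAsync<OrderFailed>
{
    public System.Threading.Tasks.Task<bool> Handle(OrderFailed message) =>
        System.Threading.Tasks.Task.FromResult(true);
}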


Others


There is more we can configure here; for example, we can define a throttling mechanism to limit the rate at which JustSaying processes messages.

By default, messages that fail handling are stored in separate queues, typically named the same as the source queue with a "-error" suffix. You can opt out of this configuration if you don't want to take care of those errors.

In case you want to intercept any error from the handler when processing messages you can pass an Action into the "OnError" event within the ConfigureSubscriptionWith() fluent method:
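A hedged fragment of the same ConfigureSubscriptionWith call shown earlier (the delegate signature for OnError is my assumption):

// ...
.ConfigureSubscriptionWith(cfg =>
{
    // Invoked when a handler throws while processing a message.
    cfg.OnError = (exception, rawMessage) =>
        Console.WriteLine($"Handler failed: {exception.Message}");
})
// ...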


Final gift

Finally, before leaving you in peace, I have a present for you: the following post from the Just Eat technology blog:

http://tech.just-eat.com/2014/07/24/opensourced-justsaying-aws-messagebus/

Enjoy and keep coding!

References

http://www.just-eat.com/
https://github.com/justeat/JustSaying
https://www.nuget.org/packages/JustSaying/

Implementing cloud design patterns for AWS

Index

  • Apologies
  • Intro
  • Design patterns


Apologies

I know, I know... It's been a while since I posted anything here, but it was for a good reason. I changed jobs more than a month ago and that (plus other things) kept me busy for some time. Besides, I'm working on a personal project which takes up most of my time, but I promise I'll be back again really soon. Promise!

Intro

As I said before, I'm working in a new place where Amazon Web Services is used heavily. My aim is to post more about this wonderful service. Apart from AWS, I'm also evaluating other PaaS providers like Heroku, so you will find posts about that too.

Design patterns

I want to use this post as an index for some of the most important patterns defined in this book:

Please see here the complete list with some links to the ones I consider more useful:
  1. Basic patterns
  2. Patterns for high availability
  3. Patterns for processing static data
  4. Patterns for processing dynamic data
  5. Patterns for uploading data
  6. Patterns for databases
  7. Patterns for data processing
  8. Patterns for operation and maintenance
  9. Patterns for networking
 
That's all for today, I wish you the best guys and stay tuned!

Boost performance in your apps with Prefix (Stackify)

Index

  • Intro
  • Hidden exceptions
  • Serialization
  • SSL Overhead
  • Garbage collection
  • SQL performance
  • References

Intro

Here you will find some useful tips to improve the performance of your web applications, based on Matt Watson's webinar from Stackify. This is not the typical stuff about how to build your queries or cache calls to your backend; it's more about the next things you should be taking care of.

Besides, today I want to introduce you to Prefix, a free tool that runs on the developer's machine (not on servers) and helps you understand where your apps spend most of their time.

Hidden exceptions

In our apps, exceptions happen even when nothing is reported to us. Take a look at the "Output" view in Visual Studio when you spin up your new MVC project: you will see some exceptions popping up there. These "hidden" exceptions require handling work we should get rid of.

Enable breaking on first-chance exceptions in your project configuration and you will start seeing things like this: the first time you access the memory cache provider, an exception is triggered because there is no performance counter assigned to it.


Automatically log first-chance exceptions and unhandled exceptions with the following snippet:
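The snippet itself is not shown here; a minimal version of the idea looks like this (where the output goes is up to your logger):

using System;

public static class ExceptionLogging
{
    public static void Register()
    {
        // Fires for every exception thrown in the AppDomain, even ones that are caught later.
        AppDomain.CurrentDomain.FirstChanceException += (sender, args) =>
            Console.WriteLine($"First chance: {args.Exception.GetType()} - {args.Exception.Message}");

        // Fires for exceptions that are never caught and are about to take the process down.
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
            Console.WriteLine($"Unhandled: {args.ExceptionObject}");
    }
}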



Now I'm getting the whole stack trace from the exceptions triggered behind the scenes. Another best practice is to use the TryParse functions when doing conversions.
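As a trivial sketch (mine), TryParse reports failure through its return value instead of throwing the hidden FormatException that Parse would:

using System;

public static class SafeParsing
{
    public static int ParseOrDefault(string userInput)
    {
        // int.Parse(userInput) would raise a first-chance FormatException on bad input.
        return int.TryParse(userInput, out var value) ? value : 0;
    }
}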


Exceptions best practices:
  1. Avoid them at all costs
  2. Find hidden exceptions with Prefix
  3. Track all first chance exceptions to hunt them down
  4. Aggregate all exceptions in production

Serialization

Making calls to web services or databases and waiting for the answer takes a lot of time. My recommendation is to always check the database query or the process that happens on the backend first. But when that bit can't be improved any further (theoretically), then it's time to talk about how the information travels from one place to another.

Web requests typically have the following steps:
  1. Serialize request
  2. Send request
  3. Receive response headers
  4. Download response
  5. Deserialize response.
You can get really good insight into the time spent on each of those steps by using a tool like Prefix. Just install it and it will start monitoring the traffic on your local dev box. Take a look at the following picture, where a data binding happens and spends 119 ms on that alone.


A good approach with web services is always to use asynchronous operations, so the interface keeps responding to user commands. Check these two methods: in the first one I make a blocking call with an input parameter, and in the second one I await the string asynchronously.
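The two methods from the webinar are not reproduced here, so below is my own minimal illustration of the same point (a blocking call versus an awaited one; the endpoint is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;

public class QuoteService
{
    private static readonly HttpClient Client = new HttpClient();

    // Synchronous version: the calling thread is blocked until the response arrives.
    public string GetQuote(string symbol)
    {
        return Client.GetStringAsync("https://example.com/quotes/" + symbol).Result;
    }

    // Asynchronous version: the thread is released while waiting, so the UI stays responsive.
    public async Task<string> GetQuoteAsync(string symbol)
    {
        return await Client.GetStringAsync("https://example.com/quotes/" + symbol);
    }
}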




I found this table very useful for understanding how long the serialization process takes in different frameworks. Keep an eye on the MVC serialization process when using objects, as it's the slowest one.

Serialization best practices:
  1. Understand payload size
  2. Customize JSON serializers
  3. Consider manually reading incoming data

SSL Overhead

Broadly used across many applications, the security handler runs on your machines and takes care of the user validation and message encryption.



SSL best practices:
  1. Offload SSL to load balancer or hardware if possible (Netscaler, F5, etc.)
  2. Use Azure Application Gateway for SSL offloading
  3. On AWS, use an ELB with SSL

Garbage collection

One key metric to monitor is called "CLR Memory % Time in GC" which comes out of the box with Stackify.

GC best practices:
  1. Avoid the large object heap
    1. Use streams where you can (example above)
    2. Avoid large strings
  2. Microsoft new feature: Server vs workstation GC mode
  3. Monitor GC performance counters

SQL Performance

In the following example I check the execution time of a SQL query, which takes a few ms to execute. Notice in the bottom right corner that it takes 6 seconds to download. This is very important to understand, as your current performance analysis reports might be wrong.


This is why we need to understand not only how long it takes to run a query, but also the download time for the user to get the results on their computer (serialization).

SQL best practices:

  1. SQL server performance is not exactly the same as real world performance
  2. Your SQL performance reports are probably wrong
  3. Use your logging or Prefix to understand real world performance.

References

Creating a local SSL certificate for local.mysite.com

Index

  • Intro
  • Solution
  • References

Intro



I've been working on the integration of a payment service provider (Amazon) and OAuth systems (Facebook, Twitter, Google, ...) for different projects, and in some cases you are forced to provide a URL other than "localhost" to those providers in order to get access to their API.

In this tutorial I'll show you how to create a local SSL certificate and how to host your web site at a URL other than https://localhost:XXXX.

Solution

Create local SSL certificate and configure VS:
  1. Add entry to hosts file
  2. 127.0.0.1 local.mysite.com
  3. Edit the project's applicationhost.config and add a new binding (see the binding sketch after this list)
  4. Run the following in the VS developer command prompt (ensure you run it as an administrator; see the makecert sketch after this list):


  5. Start > Run > MMC.exe
  6. File > Add/remove snap-in
  7. Select certificates and then local computer
  8. Find the certificate under Personal > Certificates
  9. Double click and get the thumbprint under the details tab. It should look like below:
  10. 68 98 9c 08 09 70 88 67 1c 0f 08 a5 9b 0a 74 9a 9b 1e a5 65
  11. Remove all spaces e.g.
  12. 68989c08097088671c0f08a59b0a749a9b1ea565
  13. Back in the developer command prompt, run the "netsh http add sslcert" command (see the sketch after this list). Ensure the certhash is replaced with the thumbprint you got above; the AppId can be any GUID.


  14. Add the urlacl entry (see the sketch after this list; it is important to keep the trailing slash on the URL)

  15. Move the certificate in MMC.exe from the personal to trusted folder
  16. Test the URL https://local.mysite.com:44300/. It should now work with a valid SSL certificate
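The snippets referenced in steps 3, 4, 13 and 14 are not shown above, so here is a hedged reconstruction. The port (44300), the certhash (taken from the example thumbprint above) and the AppId GUID are placeholders you must replace with your own values.

Step 3 - binding to add inside the <bindings> section of your site in applicationhost.config:

<binding protocol="https" bindingInformation="*:44300:local.mysite.com" />

Step 4 - create a self-signed server certificate for local.mysite.com:

makecert -r -pe -n "CN=local.mysite.com" -b 01/01/2016 -e 01/01/2036 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

Step 13 - bind the certificate (thumbprint without spaces) to the port:

netsh http add sslcert ipport=0.0.0.0:44300 certhash=68989c08097088671c0f08a59b0a749a9b1ea565 appid={214124cd-d05b-4309-9af9-9caa44b2b74a}

Step 14 - reserve the URL (note the trailing slash):

netsh http add urlacl url=https://local.mysite.com:44300/ user=everyone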

That's all for today folks! I hope you found this interesting and helpful in your next projects. See ya!

References

Migrate data to Azure SQL

Index

  • Intro
  • Problem
  • Solution
  • References

Intro

Today I want to show you a handy way to move data between databases hosted on premises or with cloud providers.


Problem

Normally, I host my databases locally on my development machine. This is the typical approach for every single developer in the world, I guess, unless you need to work against a replica of the production environment.

At some point, I have to take that database and restore it on the cloud or other data storage service provider. For this particular example I'll show you how to use a tool to move data from a local MSSQL server instance to an Azure SQL database.

Solution

BCP is a command-line tool which enables you to export and import data from SQL Server instances. I found it very useful when uploading data to an Azure SQL database is required. The performance provided by BCP is great, allowing me to upload 60,000 rows from my laptop to Azure in less than a minute. This will always depend on the speed of your network connection.

Here are a couple of examples for exporting/importing data.
  • Export
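The export command itself is missing here; a hedged example of what it could look like (database, server and path are placeholders, and -n exports in native format):

bcp MyDatabase.dbo.MyTable out C:\temp\data.dat -n -S localhost -T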


See in the previous example how I export all the information in the table called "MyTable" into a temporary .dat file called "data.dat".

With "-T" I'm defining "Integrated security" as this is a local host database.
  • Import
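The import command is missing too; a hedged example targeting an Azure SQL database (server, credentials and path are placeholders):

bcp MyDatabase.dbo.MyTable in C:\temp\data.dat -n -S myserver.database.windows.net -U myuser -P mypassword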


In the example above I'm using SQL authentication with the parameters "-U" for the user and "-P" for the password.

See here a list of options you can use with the BCP command-line tool:

bcp [database_name.] schema.{table_name | view_name | "query"}
    {in data_file | out data_file | queryout data_file | format nul}
    [-a packet_size]
    [-b batch_size]
    [-c]
    [-C { ACP | OEM | RAW | code_page }]
    [-d database_name]
    [-e err_file]
    [-E]
    [-f format_file]
    [-F first_row]
    [-h "hint [,...n]"]
    [-i input_file]
    [-k]
    [-K application_intent]
    [-L last_row]
    [-m max_errors]
    [-n]
    [-N]
    [-o output_file]
    [-P password]
    [-q]
    [-r row_term]
    [-R]
    [-S [server_name[\instance_name]]]
    [-t field_term]
    [-T]
    [-U login_id]
    [-v]
    [-V (80 | 90 | 100 | 110 | 120 | 130)]
    [-w]
    [-x]
    /?


References

Download Microsoft Command Line Utilities 11 for SQL Server
Microsoft Blog tutorial
Tutorial for using BCP

Today I'm in... The Microsoft Enterprise Mobility event

Intro


Today I'll be meeting with some engineers from Microsoft about enterprise mobility. The main topics for the day will be Identity, Devices, Apps & Security, so I'm sure there will be a whole bunch of things to talk about with these folks.

Agenda

09:00-10:30  Microsoft Azure Active Directory - Identity & Access Management In The Cloud
10:30-10:45  BREAK
10:45-12:15  Microsoft Intune - Mobile Device & Application Management
12:15-12:45  LUNCH
12:45-14:15  Microsoft Azure Rights Management - Protect Your Information Wherever It Goes
14:15-14:30  BREAK
14:30-16:00  Microsoft Azure RemoteApp - Run Windows Apps Anywhere


After the meeting...

Please see here enclosed the overview of the presentation with a list of the services provided: