
Security+ Series Part 8: 3rd Party Integration Risks

Our Security+ fast track continues, and in this article we will look at terms used in conjunction with 3rd party integration and the associated risks.

Sooner or later your company grows and starts doing business with other companies. Your business applications and data need to be accessed by your partners and customers.

This lesson will teach you about concepts that can help you protect data when such situations arise.

Let's get going.

On-boarding/Off-boarding business partners

Well, the name says it all. The best practice is to have a policy or procedure defined for when such an event occurs. This could encompass what kind of access is needed, to what data, and under what circumstances. For example, you can set up a secure VPN for your supplier.

There is a certain procedure when kids are being onboarded onto the school bus

Social media networks and/or applications

Social networks are a phenomenon of our time. Used the right way, they can offer great benefits. Your marketing department can use these media for company marketing, product promotion, or gathering general feedback from customers.

There are, however, also some risks associated with social network misuse. If your employees are not trained well, they could accidentally leak private information.

Your policy should also outline how to use these media the proper way.

Starbucks using Facebook to promote the pumpkin spice latte

Interoperability agreements

There are several agreements that can be signed between two entities when they decide to work together toward a common goal. Here are the most commonly used:

There can be a lot of agreements between two parties


A Service Level Agreement is a formal document between two parties that defines what service is being offered. For example, when you order an MPLS VPN service to meet your branch connectivity needs, you will agree with the provider on the level of service you get in terms of access rates, quality of service, and service availability.


A business partner agreement is yet another document that can be signed when partners decide to do business together. It may contain things like profit sharing, cost sharing, and so on.


A less formal document called a Memorandum of Understanding describes a gentlemen's agreement between two companies that plan to do business together. It outlines what they are trying to accomplish together.


An interconnection security agreement is a document that mandates what actions need to be taken when connecting to or disconnecting from a business partner. It focuses on the technology side of the partnership. An example can be found here.

Privacy considerations

When you have many partners and customers, you need to make sure that the data they work with is safe and confidential. You would not like partner A to be able to access partner B's data, or to use your network as transit.

Risk awareness

Before we can mitigate risks, we first need to be aware of them. Risk awareness training is important not only for your own staff, but also for partners and suppliers. They can all help you solidify the integrity of your company.

Unauthorized data sharing

When you are working with a partner, make sure you are only giving access to the data needed to complete the workflow. This way you minimize the risk of unauthorized data sharing. You also need to be clear on how your partner will protect your data within their infrastructure.

Data is slowly leaking out of your data pipes just like water.

Data ownership

When you are working on a project for a customer, you may often involve some of your partners to deliver a sub-service. For example, your company may take care of the server part while your partner delivers the network infrastructure.

In such a case you need to agree on where you will store project documents such as sales orders, design documents, configuration scripts, and others. You also need to decide who will present these documents to the end customer, as this may affect the document format, logos, and so on.

Data backups

As I mentioned above, once data ownership is sorted out, it is vital to agree on who will protect the data against loss. Usually the data owner is responsible for this part.

Follow security policy and procedures

As my colleagues would say, stay calm and carry on. At your company you have certain procedures for handling data, perhaps depending on its security level. Make sure that your business partner is also aware of and follows the policy when handling the data.

Review agreement requirements to verify compliance and performance standards 

When you have everything in writing, make sure that you and your partner know and understand the requirements for using the data being accessed. Performance standards can describe what level of resources the partner will get; for example, in virtualized environments this can encompass the pool of RAM, CPU, or storage.

And we have come to the very end of this article. As always, I hope you learned something useful. See you in the next post in the series, which will cover some strategies to reduce risks.


Security+ Series Part 7: Risk Calculation

Welcome back to the Security+ series. In this post we are going to explore some techie and non-techie terms that will help us argue with our management to get some funding to get the security ball rolling.

We all know how important it is to keep our stuff safe and available, but sometimes that feeling itself is not enough to convince our stakeholders to give us the money to make it happen.

That is why it is important to provide some real numbers. And that is the purpose of this post: putting risk and math together.

Risk Calculation

Risk describes the likelihood that a weakness in a system will be successfully exploited, together with the impact of that exploitation. Heartbleed and ShellShock are examples of vulnerabilities with very high risk, simply because so many systems were vulnerable and the impact is high. IT companies would definitely invest time and money to fix such issues ASAP, otherwise they could lose a lot of reputation and money. If you speak to management, always quantify in numbers (meaning $$$); they will listen to you more closely.

One example would be justifying the build of a disaster recovery site in case of a primary data center failure. The capital and operational expense might be high, but in case of a primary DC failure, the service and therefore financial loss can be even higher, not to mention losing customers.

Alain Robert, the real-life Spiderman, has risk under his control


Likelihood is the probability that a vulnerability will be exploited. For example, the likelihood of stealing data through SQL injection is much higher than that of physically compromising the database server, simply because everybody on the Internet can play with your web app, but not many of those folks have the guts to pull off social engineering tactics to get into your premises physically.

There is a likelihood of not walking away alive after this game


Do you remember a game called Space Impact, which was epic on the Nokia 3310? You were in a spaceship shooting down aliens, and at the end of each level a big boss would appear.

When you destroyed a few of those small alien ships nothing fancy would happen, but when you defeated the boss, boy, that was a huge impact for the aliens.

The same is true with security: if your DB gets compromised you are in big trouble, much bigger than if someone were to root your Counter-Strike server, because game servers usually do not hold sensitive data and only provide a presentation layer.

Space impact helps you understand the impact


Single Loss Expectancy is the cost associated with a single occurrence of a certain type of unwanted event. For example, if your hard drive fails and you do not have a backup, the cost may be higher than just the price of a new drive. The cost will include any lost data, which you need to re-create in the best case; in the worst case, it is lost forever. The SLE is expressed in cash.


Annualized Rate of Occurrence, as the name implies, describes how often the unwanted event occurs. Does your HDD fail twice a year or once per 5 years? It is important to know, because sometimes the cost of the risk may be lower than the cost of eliminating it. For example, if all your important files are already backed up and only system files could be lost, then installing a second HDD and enabling RAID in every client machine would not be cost effective. ARO is usually described in events per year; for example, if an event occurs twice a year, the ARO would be 2.


Annualized Loss Expectancy is the number you get when you multiply the Single Loss Expectancy by the Annualized Rate of Occurrence. It gives you a better overview of whether mitigating a certain risk is worth the cost.

For example, say you lose main power to your production gear twice a year, and each event costs you $10,000 in lost revenue. The ALE would be $10,000 * 2 = $20,000.

In such a case it would be wise to invest in a UPS device or a second power feed.
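To make that concrete, the whole SLE/ARO/ALE relationship fits in a few lines of Python (the figures are the power-outage example above; the helper name is my own):

```python
def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = Single Loss Expectancy * Annualized Rate of Occurrence."""
    return sle * aro

# The power-outage example from the text: $10,000 per outage, twice a year.
ale = annualized_loss_expectancy(sle=10_000, aro=2)
print(ale)  # 20000.0 -> a UPS costing less than this per year pays for itself
```

If the yearly cost of a mitigation (say, a UPS maintenance contract) is below the ALE, the numbers back up the purchase when you talk to management.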


Mean Time To Repair describes how long it will take to restore a service to the way it was. For example, if you run out of toner, how long will it take to install a new one? If it takes just a few minutes because you have spares on site, that is perfectly fine. But if you have no spare and need to quote and order one, it may take a week to get the printer up and running. Execs would not be happy that they need to wait a week to print a financial report for a meeting.


Everything fails, do not argue about that. Rather, the question is: how often does it fail? Mean Time Between Failures can give you an estimate. Vendors usually list this value with their product. For example, Cisco states that their Catalyst 2960G-48TC-L is likely to fail every 221,432 hours. What fails most of the time is usually the power supply, so for critical devices aim for at least two power supply units.


Mean Time To Fail is very similar to MTBF; the difference is that MTTF relates to products that are usually not repairable, for example some micro components of a larger system. A capacitor, for instance, can handle only a certain number of cycles over its lifetime.
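As a side note, MTBF and MTTR are often combined into a steady-state availability estimate, Availability = MTBF / (MTBF + MTTR). This formula is not from the article, but it is the standard way to turn the two numbers into a percentage management understands. A quick sketch, using the Catalyst MTBF quoted above and an assumed 24-hour repair time:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the device is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Catalyst 2960G MTBF from the vendor datasheet; the 24h MTTR is an assumption.
a = availability(mtbf_hours=221_432, mttr_hours=24)
print(f"{a:.5%}")  # just shy of "four nines" with a one-day repair window
```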

Quantitative vs. qualitative (ALE) 

The Annualized Loss Expectancy can be expressed in two ways. Quantitative means you have the numbers in pounds; you can relate them to an amount of cost. To put it simply, you have the data backing you up when you speak to shareholders.

The qualitative representation is your gut feeling, which likely comes from your previous life experience. You just know that the hard drive will not last forever.


Vulnerabilities are kind of a favorite topic in the security world. You can find them everywhere, and everybody talks about them. What is a vulnerability, exactly? Well, to put it simply, it is a weakness in a system. Weaknesses can be introduced by the design itself, by the implementation, or by not following best practices. To put some meat into the discussion, the ShellShock vulnerability in Bash was present in the code for almost 20 years before it was disclosed to the public.

Offensive Security runs a website called Exploit-DB which collects newly discovered exploits and vulnerabilities.

One of the most advanced computer viruses, Stuxnet, had the capability to exploit several zero-day weaknesses. Its mission was to slowly destroy centrifuges in an enrichment facility. The term zero day refers to a vulnerability that has not yet been revealed to the public.

Well done presentation about Stuxnet

Threat vector

A threat vector is the path or means an attacker can use to reach a target; the sum of all these paths makes up the attack surface. For example, a web service exposes a different surface than a print server. A web application can be attacked via web-based attacks such as SQL injection or XSS, or via a vulnerability in the daemon itself. More services, bigger attack surface.

For example, a router with locked-down SSH and minimal services running has a smaller attack surface than an Internet-facing web server.
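To make the SQL injection risk mentioned above concrete, here is a minimal, self-contained sketch (Python's sqlite3 with a made-up table) showing why string concatenation is the weakness and why a parameterized query closes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the payload becomes part of the SQL text and matches every row.
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(leaked))  # 1 row leaked despite the bogus name

# Safe: the driver treats the payload as a plain string, so nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(safe))  # 0
```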


Probability describes how likely it is that a vulnerability will be exploited. As I mentioned, SQL injection would be much more likely to occur than social engineering at your corporate premises.

Risk-avoidance, transference, acceptance, mitigation

Sometimes the introduction of a new service carries such high risk that a company may decide not to implement it at all; this is risk avoidance. It is typical for new software releases: companies often wait months after the initial release just to avoid the bugs and vulnerabilities in new code.

Other times, companies may accept the risk associated with a service. For example, BYOD, or Bring Your Own Device, may open new attack vectors for the company, but the value of the service outweighs this risk. Risk transference, in turn, means shifting the risk to a third party, for example by buying insurance or outsourcing the service.

Mitigation refers to how we reduce risk. Following best practices, regularly patching and reviewing system configuration, performing vulnerability scanning: all these activities help reduce the risk of being exploited.

Risks associated with Cloud Computing and Virtualization 

With new trends come new risks. Cloud computing can provide a number of great benefits, but it is important to understand the risks as well. For example, if one customer of a multitenant cloud gets compromised, how well has the cloud provider isolated the contaminated environment so that other customers are safe?

What if an attacker finds a way to crack the hypervisor and gain access to all the virtual machines running on top of it?


The Recovery Time Objective describes how long it will take to bring a failed system back online. If your e-commerce site generates a ton of money, you obviously want to have it up and running in no time.

Ma’am restoring these backups will take ages.


The Recovery Point Objective is usually related to storage. How often do you do a full backup; for example, every night? In that case you can only recover up to that point, and you lose the data written during the day. In practice you usually back up on a daily or hourly basis, but you also keep track of the transactions that happened during the day so you can restore to the most recent point in time. Obviously, a shorter RPO will cost you more money.

And with that, my friends, we are closing this section on risk calculation. I hope that you learned something new today. In the next one, we will be exploring the risks associated with connecting our infrastructure to third parties.

Ending the password madness

Have you ever thought about how effective your passwords are? I mean, in one corner of the ring is the approach of using a different password on each website, and in the other corner is the approach of using the same complex password everywhere.

Not very good choices, if you ask me. If you choose the first, you end up with way too many passwords to remember; it becomes very cumbersome. If you choose the second, with one very strong password, then if one of the sites gets compromised, everything goes with it.

This is password management

I went for something in the middle: one complex password to protect a set of random but complex passwords. Ladies and gentlemen, welcome the password manager of the day, 1Password.

1Password asking for master password


I have been using this product for a long time and I must say it is one of the best password managers I have used so far. It is simple and easy to use. It includes a password generator, plugins for the most popular web browsers, and now seamless integration with your smartphone.

You have multiple ways to synchronize the password DB with your phone. You can use iCloud, Dropbox, or, voilà, your own WiFi, so your data will (hopefully) never leave your network. The setup is very straightforward.

First, configure the client app for synchronization across the local network, in Preferences/Sync.


You will need the 12 character code to validate your smartphone

Then, open the app and follow these steps to initiate and verify the synchronization process.

1Password Sync on iPhone


If you encounter the following error during synchronization:

1Password Sync Error

Try restarting your firewall and/or stealth mode, and make sure 1Password is allowed to receive connections.

Mac OS X Firewall

I find the password generation feature so handy that I don't even care what passwords get generated, as long as they are long, complex, and stay protected.
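A password generator like the one bundled with 1Password is easy to sketch yourself with Python's secrets module; the character set and length below are my own choices, not 1Password's defaults:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Random password from letters, digits, and punctuation,
    drawn from a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

The key point is using secrets (a CSPRNG) rather than the random module, which is predictable and unsuitable for anything security related.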

Free F5 Practice Exam: Application Delivery Fundamentals

It just so happened that while I was reading what's new at F5, I discovered that they are restructuring their certification program, and as part of it they are offering a free practice exam that is one of the two exams needed to achieve the F5 Certified BIG-IP Administrator (F5-CA) certification.

All candidates eligible for the 101 exam have, for the first time ever, the chance to take a practice exam for 101-Application Delivery Fundamentals. This free exam, offered until November 23, 2014 at all Pearson VUE test centers, allows a limited number of candidates to preview questions and offer feedback on the questions. This feedback not only helps us improve the individual exams, but the entire Certified program. 


Exam description

Identify candidates who possess knowledge and understanding of the concepts and technology standards that are applicable to application delivery engineers and iRules developers working with F5 products, e.g., BIG-IP Local Traffic Manager (LTM), BIG-IP Global Traffic Manager (GTM), BIG-IP Application Security Manager (ASM), BIG-IP Access Policy Manager (APM), BIG-IP WebAccelerator.

The exam is Exam 101 – Application Delivery Fundamentals, and according to its blueprint, it covers the following areas:

  1. OSI
  2. F5 Solution and Technology
  3. Load Balancing Fundamentals
  4. Security
  5. Application Delivery Platforms

What you need to do to be eligible to take this free practice exam at VUE is the following:

  1. Register at F5’s certification tracking system; you will receive an F5 ID.
  2. After logging in you will be presented with a nondisclosure agreement and a Certification Program agreement. Accept both.
  3. Once you are eligible, F5 sends your details to VUE, which creates an account for you. It usually takes a couple of hours; you will receive an email with your new login. This is DIFFERENT from the normal VUE account which you may have used to sign up for other exams.
  4. Only the first 101 candidates are eligible to take the free exam, so hurry up.
  5. You will have a chance to register from the 20th of October, which is, at the time of writing this article, tomorrow morning.

Good luck to all candidates, and may the force be with us all. Hopefully I will catch my spot and be one of the lucky testers.

Breaking the Status Quo with VMware NSX

Before I present you with a cutting-edge technology called VMware NSX, I would like to step back and give you some broader perspective on why this product is so special.

Not long ago, before server virtualization, we used to have a model where every application lived on a separate physical machine. We would have a separate server for email services, a separate server for file services, a separate server for web services; a lot of independent machines. You get the point.

If one server went down, we had a service outage and needed to rebuild the server and restore files from backups. It was a time-consuming process and it required a lot of labor. When we needed to deploy a new application, we would wait weeks just for the hardware. Clearly there were areas to improve.

A few years ago, server virtualization was introduced and it brought huge benefits derived from hardware abstraction. For the first time, it decoupled hardware from software. We could run many virtual machines on a single physical server. And this was accomplished by a piece of software called the hypervisor.

The hypervisor is a small piece of software that runs on a server, and its ultimate role is to abstract computing resources. The operating system thinks it speaks directly to the hardware, but in fact it is really speaking to the hypervisor.

Before and After Compute Virtualization


But this was just the beginning: we could now take many servers and create a cluster. To an application or an OS, this cluster looks like a giant hardware resource pool providing CPU, RAM, and storage for consumption. We could start doing things like dynamic resource scheduling and rapid VM provisioning; we had a programmatic way to provision resources. For the business, it means the time to deploy new services drops from days or weeks to minutes. Show me one CEO who would not fall for that.

Virtualization, my friends, changed the computing landscape forever. And I am so pleased to share with you that it is happening once again; this time the network is the one being transformed.

The fundamental idea of network virtualization is to bring network abstraction, to decouple the physical infrastructure from the applications that run on top of it. Do not be confused: you still need physical switches, but the way the overall infrastructure is leveraged will be different.

In this world, the underlying physical network provides simple IP transport services, similar to how servers provide physical resources to the hypervisor. On top of this layer, a network hypervisor manages the use of these physical resources and programmatically presents them to applications for consumption.

Comparing compute to network virtualization


It would not be feasible to control each hypervisor independently; therefore we need a component that programs this abstraction layer centrally. This component is called the controller. We moved from a distributed model, where every device thinks for itself, to centralized control. Think of this component as the brain of the network, the mastermind.

We can interact with this mastermind in multiple ways. It provides an Application Programming Interface, of which there are two types: the northbound API and the southbound API. The first is used for application calls such as creating a logical network, creating a logical switch, or changing firewall rules. The second is used when the controller needs to command network components such as the virtual switches in the hypervisors.

The network controller is the Brain from Pinky & the Brain


See, in this way the controller can program any arbitrary topology. The magic is then executed at the hypervisor running on every server. You can build multi-tier networks with API calls instead of going to each device and typing complex CLI commands.
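As a rough illustration of what a northbound API call could look like, here is a sketch that only builds the request. The URL path and field names are invented for illustration and are not the actual NSX API:

```python
import json

def logical_switch_request(name: str, transport_zone_id: str) -> dict:
    """Build a northbound-API style request to create a logical switch.
    The path and field names below are illustrative, not the real NSX API."""
    return {
        "method": "POST",
        "path": f"/api/transport-zones/{transport_zone_id}/logical-switches",
        "body": {"name": name, "replication_mode": "unicast"},
    }

# One call per tier instead of CLI sessions on every switch in the path.
req = logical_switch_request("web-tier", "tz-01")
print(json.dumps(req, indent=2))
```

The point is the shape of the workflow: an orchestration script issues a handful of such calls, and the controller pushes the resulting state down to every hypervisor via the southbound API.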

You must be thinking: fine, you can program the hypervisor vSwitch, but what about the underlying transport infrastructure? How will my Cat 6500 know that I am creating a new virtual network? And the thing is, it won't.

Hypervisors will tunnel traffic to create an overlay between each other, so the transport network is spared the complexity of our new virtual network. The underlay just routes packets from one hypervisor to another.

VXLAN tunnels create an overlay network

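The overlay encapsulation itself is surprisingly simple. As a sketch, here is the 8-byte VXLAN header from RFC 7348 built in Python; everything after it is just the original Ethernet frame carried inside a UDP datagram to port 4789:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: a flags byte with the I bit set, then the
    24-bit VXLAN Network Identifier, each padded with reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # First word: flags (0x08) in the top byte, 24 reserved bits.
    # Second word: VNI in the top 24 bits, 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900
```

The 24-bit VNI is what makes the overlay scale: roughly 16 million isolated segments versus the 4094 VLANs a traditional trunk can carry.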

VMware NSX is a product that can help you realize the vision of software-defined networking. It is a network virtualization platform and it is very extensible. In a nutshell, it can help you create complex virtual networks in software through API calls.

You can create logical switches that span your entire data center, distributed virtual routers that route packets right at the hypervisor level, a distributed virtual firewall, and many others. You can literally encapsulate your entire data center network and move it around the globe without ever needing to change the applications themselves.


Example of 3 tier application using a virtual network

VMware is not alone in this big game; it has partners that can bring additional services and capabilities, such as deep application-level inspection, vulnerability assessment, or other higher-level services.

I will draw the line in the sand here and leave the possibilities to your imagination.

You can expect more posts covering this platform in the near future; meanwhile, enjoy this video about NSX.

Security+ Series Part 6: Compliance and Operational Security

Welcome back to part 6 of the series. In this one we are going to explore compliance and operational aspects of security. You will learn what control types we have, what a false positive is, and what kinds of policies are used in real life. Take a deep breath, we are starting in 3..2..1.

Control types 

Control types define how we are going to enforce the security policy in our company. They are defined in NIST Special Publication 800-53. Generally, they can be broken into three categories.

The first one is Technical, and it can describe, for example, how we are going to filter web content or how we are going to enforce that only authenticated users connect to the wireless network.

The second category includes Management control types. An excellent example from this category is the change management process. It describes, for example, how we are going to handle firewall change requests, what approvals are needed, and how the change is tracked.

The last category includes Operational control types. This category may state what level of security awareness is required from personnel, and how to respond to incidents and security breaches.

False positives

This term describes the case where an Intrusion Prevention System fires an alert on traffic that was not harmful. This is undesirable because the IPS effectively killed our production traffic; therefore, when deploying an IPS in production, it is a good idea to spend some time fine-tuning the inspection engine.

False negatives

This term is also used in the IPS realm; it describes an event where the IPS did not catch the malicious traffic and an attack took place. Again, this event is undesirable, and it may happen when an attacker pulls out a 0-day (unknown) exploit to take advantage of an unpatched vulnerability.

Importance of policies in reducing risk

A company’s policies play a major role in reducing overall risk. They can specify what actions are allowed and disallowed within the company and how to react in certain situations, e.g., fires or floods. This information should be shared with every employee. It is not limited to IT system usage but also covers the general work environment. Some examples of policies include: Acceptable Encryption Policy, Acceptable Use Policy, Clean Desk Policy, Email Policy, Password Protection Policy, and many more.

Privacy policy 

A privacy policy is a document that describes how to handle sensitive information, for example credit card numbers, social security numbers, and basically all internal and confidential documents or any other form of intellectual property.

It can cost you some bucks if you hand out the company’s secrets

Acceptable use

An acceptable use policy may be part of the security policy or a standalone document. As the name implies, its purpose is to define how IT services can be leveraged and how to handle corporate resources and information.

Security policy

A security policy is another written document that defines rules that must be followed within an organization. It may describe what behavior is allowed or prohibited; for example, it may define which site categories an employee is allowed to visit on the Internet. Examples of this and other types of policy documents can be found at SANS. They may be used as starting points when defining a security policy for your own organization.

Mandatory vacations 

Often found in many companies, mandatory vacations mean that employees are required to take some days off, either to avoid going crazy and clear their heads from work, or to reveal fraud. A mandatory vacation can be requested by your manager or boss.

Do you need anybody to force you to do this? Seriously?

Job rotation 

Job rotation is a common practice where people from different teams, such as engineering and operations, swap their roles for a certain period of time. This is useful for getting a broader picture of how each team works, and it should help increase the level of cooperation between people.

Even farmers know what job rotation means

Separation of duties

With great power comes great responsibility, as uncle Ben would say. Separation of duties in IT means that tasks are divided among many people. One group may handle change supervision, the next group handles change implementation, and another group is in charge of change approval and review.

The main point here is that no single person holds all the roles. It is always required that more pairs of eyes look at a change before it gets implemented. This approach reduces risk.

This speaks for itself

Least privilege 

In our company we may have multiple teams that handle different parts of IT delivery. We may have a service desk which essentially answers service requests. We may have guys at the Network Operations Center who monitor network health, and we may also have hardcore admins doing heavy-duty troubleshooting.

All these roles have different privilege requirements. For example, Level 1 NOC staff may have only read-only access for basic checks, while the L3 guys may have root access. This approach also increases overall security, and it is often required to comply with security audits.

Call center folks do not get system level privileges but are quite happy without them

And with that sentence, this article comes to its end. I hope you learned something new today; see you in the next one, which will revolve around risk calculation.