Bleeding edge: The intersection of Bitcoin and cyber-security

The good, the bad, & the ugly…

There are some very obvious connections between bitcoin and cybersecurity; almost every hacker who blackmails victims via ransomware or some other attack demands payment in bitcoin. This is the ugly side of bitcoin and cybersec: by its very nature bitcoin is pseudo-anonymous (read: difficult to trace), decentralized (read: difficult to take down) and increasingly easy to use. No wonder hackers love bitcoin.

But what are the other facets to bitcoin melding with cybersec?

The bad…

… it can be used to control botnets

The bitcoin blockchain is designed to be extremely difficult to take down, private, and unregulated. Sounds like the perfect medium for a Command and Control [C&C] service. Enter ZombieCoin 2.0:

The authors of this paper successfully manage to design

[…] ZombieCoin bots which we then deploy and successfully control over the Bitcoin network.

They do this by embedding simple botnet commands into the bitcoin transaction field OP_RETURN, which is normally used for transaction metadata (similar to the reference field in your online e-banking portal). This field allows you to include up to 80 bytes of data, which the authors use to control their bots. The resulting bot is only 7MB in size and stores only about 626kB worth of blockchain, and the traffic generated by this C&C method is indistinguishable from normal bitcoin traffic.
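To make the mechanism concrete, here is a minimal sketch of packing a command into an OP_RETURN script. This is an illustration only; the command string and function name are invented, not taken from ZombieCoin.

```javascript
const MAX_OP_RETURN_BYTES = 80; // standard relay limit for OP_RETURN data

// Build a raw OP_RETURN output script carrying an arbitrary short payload.
function buildOpReturnScript(command) {
  const data = Buffer.from(command, 'utf8');
  if (data.length > MAX_OP_RETURN_BYTES) {
    throw new Error('OP_RETURN payload limited to 80 bytes');
  }
  // Payloads up to 75 bytes use a direct push opcode equal to the length;
  // 76-80 bytes need OP_PUSHDATA1 (0x4c) first. 0x6a is OP_RETURN itself.
  const push = data.length <= 75
    ? Buffer.from([data.length])
    : Buffer.from([0x4c, data.length]);
  return Buffer.concat([Buffer.from([0x6a]), push, data]);
}

// Hypothetical bot command, 13 bytes of UTF-8:
const script = buildOpReturnScript('DDOS 10.0.0.1');
```

Any node following the chain (including a 7MB bot) can then scan transactions for OP_RETURN outputs and decode the payload.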

Time to start blocking bitcoin traffic on your enterprise network

The good…

… it can make Man-in-the-Middle attacks a thing of the past

Most MiTM attacks rely on being able to change data that is supplied to a client – for example, changing DNS entries or HTTPS certificates. Current DNS/SSL/TLS protocols struggle to make this data tamper-proof. DNSSEC hasn’t really taken off, and SSL/TLS relies on central authorities that can be compromised or abused.

However… if attackers can embed data in the blockchain, so can developers and defenders. Inheriting all the benefits of the blockchain, this embedded data will not only be resilient and decentralized (like you’d hope DNS is…) but also backed by cryptography, resulting in tamper-proof data. Any entries into this system would have to be validated and agreed upon by at least 51% of the network to be accepted…

So what if we used blockchain instead of DNS and HTTPS certificate authorities? Entities would use the blockchain to resolve their IP addresses and provide their public certificates in a safe, secure manner. This is the basic concept behind a blockchain-based technology aptly named Namecoin:
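To make the idea concrete, here is a toy resolver sketch. The record layout below is a simplification invented for illustration; Namecoin's real d/ namespace specification is richer, and the name and fingerprint are made up.

```javascript
// Stand-in for name/value pairs stored on the chain: each name maps to a
// JSON value holding the IP address and a TLS certificate fingerprint, so
// a client can verify both without trusting DNS or a certificate authority.
const chainRecords = {
  'd/example': JSON.stringify({
    ip: '203.0.113.7',
    tlsFingerprint: 'ab:cd:ef',
  }),
};

// Resolve a domain-style name against the (mocked) blockchain records.
function resolve(name) {
  const raw = chainRecords['d/' + name];
  if (!raw) {
    throw new Error('name not registered on the chain');
  }
  return JSON.parse(raw);
}
```

The key difference from DNS is that these records are only accepted once the network has validated them, so a man in the middle cannot quietly swap in their own IP or certificate.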

While it may look rather theoretical, or at least difficult to migrate our systems to blockchain, it turns out that there is already some excellent work being done by Greg Slepak to simplify this and make Namecoin extremely easy to use for both webmasters and websurfers, in the form of okTurtles:

For a more in-depth read about okTurtles, have a look at their overview:




From jQuery to ReactJS

I have previously worked with ReactJS, most notably during my Master’s dissertation. However, the main JavaScript library I use when working with clients and companies remains the venerable jQuery. This is changing as more and more organizations I interact with move to modern frameworks like Angular and ReactJS.

Where to start if you’re looking to upgrade your skills, or to migrate a product from jQuery to ReactJS? Any tips? Here’s what I’ve learned so far:

Be ready to change your mindset – but never give up

It’s not so much a learning curve as it is a roller coaster. Blogger Ben Nadel has an excellent drawing that I guarantee will also be your experience learning any JS framework; be it Angular or React:

Pretty much…

What makes ReactJS so powerful compared to jQuery is the fact that it’s fundamentally different. Sometimes your own prior experience with jQuery will hamper you in learning React because of preconceived notions of how things should be done. Power through it – through all the ups and downs, the curve still has an upwards trend. Once you get to grips with the whole idea of unidirectional data flow, components, props and state, React makes you a lot more productive. That said, there are a few things that made my transition easier…

Create-react-app is a must

No doubt about it, the worst part of being a newbie is getting used to the new tools, environments, moving parts and components of your new pipeline. Create-react-app is your savior here. Not only does it set up your dev environment in a few simple commands, you can rest assured that it’s doing so according to industry best practices (which eases those nagging questions at the back of your brain: “Am I doing this right?”). One very useful article I found was how to configure create-react-app to proxy API requests directly to my back-end:

How to get “create-react-app” to work with your API
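For quick reference, create-react-app supports this out of the box during development via the `proxy` field in `package.json`; the dev server forwards unrecognized requests to the address you give it. The port below is just an example:

```json
{
  "name": "my-app",
  "proxy": "http://localhost:4000"
}
```

With this in place, a `fetch('/api/prices')` from your React code hits your back-end during development without any CORS gymnastics.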

Unleash the power of reusable components

It takes banging your head against a few project requirements to realize just how powerful and useful React components are. In reality, the whole point of using React is to define a component once:

class David extends React.Component {
  render() {
    return <h1>David says hi...</h1>;
  }
}
… and then re-using that component over and over again as if it were a “normal” HTML tag:

 <David />

Thing is, when clever people start publishing their own components, you can re-use them too, and requirements that previously took days now take you minutes. These components could be anything; my favorite so far:

Applying Google’s MDL to React JS becomes a breeze…

Redux-Saga vs Redux-Thunk

Async libraries like Dexie.js bring me to my next point. One of the aspects of Redux that always caught me with my pants down was the inability of reducers to process async functions (it seems logical in hindsight: an async function returns a handle to a future result, such as a Promise, not the actual result). This is something you need to keep in mind, since some very common requirements, like fetching data from an API or indeed using libraries like Dexie.js, require async functions.
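The "handle to a future result" point is easy to verify for yourself (the function below is a made-up stand-in for an API or Dexie.js call):

```javascript
// Calling an async function never gives you the value directly: it gives
// you a Promise, which a pure, synchronous reducer has no way to await.
async function getPrice() {
  return 42; // pretend this came from an API or IndexedDB
}

const handle = getPrice();
// `handle` is a Promise here, not the number 42
```

This is exactly the gap that middleware like redux-thunk and redux-saga exists to fill.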

In order to tackle this, redux-thunk was born. A nice and easy blog post on how to use it can be found here:
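The core pattern is small enough to sketch in a few lines. This is a dependency-free illustration of the thunk idea; `fetchPrices` and the action names are invented for the example:

```javascript
// A "thunk" action creator: instead of returning a plain action object it
// returns a function that receives dispatch, so the async work (and any
// follow-up dispatches) can happen inside it. The redux-thunk middleware
// is what calls this function with the store's dispatch.
function loadPrices(fetchPrices) {
  return async function (dispatch) {
    dispatch({ type: 'PRICES_LOADING' });
    const prices = await fetchPrices();
    dispatch({ type: 'PRICES_LOADED', prices });
  };
}
```

Your components still just dispatch `loadPrices(api)`; the middleware takes care of invoking the returned function.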

Coming from a jQuery background, I was already used to the idea that events trigger certain actions, just like in redux, where actions trigger reducers that in turn change your state. The redux-thunk approach of defining an async function within an action (an “action creator”, as the blog above refers to it) ran a bit contrary to my thinking. It would be easier (to my jQuery mind) to have a function that “listens” for an async action, performs it, and then returns another action with the result for your dispatch to process. This thought process resonated more with me (again, coming from an event/subscription model). In other words:

Redux-Saga Workflow

As you can see, this is where Redux-Saga enters the picture. This redux middleware listens for specified actions (usually those that need to trigger async work like API operations or Dexie.js calls), runs a function to deal with the async work (tip: definitely try to stick to the yield keyword…) and then dispatches another action with the results, which your usual redux reducers can then pick up. I guess it comes down to personal preference, but this paradigm was a lot easier to deal with in my head.
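Because sagas are generators that yield plain effect descriptions, the whole listen/run/dispatch workflow can be sketched without the library itself. The `call` and `put` below are simplified stand-ins for redux-saga's real effect creators, the runner is a toy version of the middleware, and `fetchPrices` is invented for the example:

```javascript
// Simplified effect creators: each just describes what should happen.
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

// Worker "saga": waits for the async result, then emits a follow-up
// action carrying that result for the ordinary reducers to pick up.
function* loadPricesSaga(fetchPrices) {
  const prices = yield call(fetchPrices);
  yield put({ type: 'PRICES_LOADED', prices });
}

// Toy effect runner standing in for the saga middleware: it interprets
// each yielded effect and feeds results back into the generator.
async function run(saga, dispatch) {
  let step = saga.next();
  while (!step.done) {
    const effect = step.value;
    if (effect.type === 'CALL') {
      step = saga.next(await effect.fn(...effect.args));
    } else if (effect.type === 'PUT') {
      dispatch(effect.action);
      step = saga.next();
    }
  }
}
```

A nice side effect of this design is testability: because the generator only yields descriptions, you can step through it in a test and assert on the effects without ever touching a real API.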

Always try to code a simple sample project while learning

In my case most of my learning was done through reading documentation and via Codecademy. My side project was to build a web app that tracks my cryptocurrency investments, incorporating features like IndexedDB via Dexie.js, API calls via Saga, and so on…

(Note: sparklines only work if you visit the page at least twice from the same browser. The app has no server-side components; it runs entirely in the browser, so the sparklines represent the change in cryptocurrency price since your last visit.)