Analyzing credit card transactions using machine learning techniques

Introduction

In this three-part series we’ll explore how three machine learning algorithms can help a hypothetical financial analyst examine a real data set of credit card transactions to quickly and easily infer relationships, spot anomalies, and extract useful information.

Data Set

The data set we’ll use in this hypothetical scenario is a real data set released by the UK government, specifically the London Borough of Barnet. The full data set can be downloaded here:

https://www.europeandataportal.eu/data/en/dataset/corporate-credit-card-transactions-2015-16

In a nutshell, the dataset contains raw transactions (one per row) which contain the following information:

  • Transaction ID and Date
  • Type of account making the payment
  • Type of service paid for
  • Total amount paid

Algorithms & Workflow

For the purposes of this article, we will process the data set using an excellent Python toolset from the University of Ljubljana called Orange. For more details, have a look at their site: https://orange.biolab.si/. I won’t delve into Orange’s details in this post; it’s enough to display the workflow used to obtain our results:

[Figure: Orange data pipeline and workflow]

Note how we first load our data from the CSV file, and do one of two things:

  • Discretize the data. Correspondence analysis only accepts categorical data, so we split the “total” column – which is numeric – into categories, for example:
    • <10
    • 11-21
    • 22-31
    • >200
  • Continuize the data. Conversely, PCA and clustering accept only numerical values, so we use this function to perform “one-hot encoding” on categorical data, and we also normalize our data. We then feed this data into PCA and view the results there, before feeding them into our clustering algorithm to make it easier to spot anomalies (a rough scripted equivalent of both steps is sketched below).
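
Outside the Orange GUI, the same two preprocessing steps could be sketched in plain Python with pandas and scikit-learn; the file name, column names and bin edges below are hypothetical:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("barnet-credit-card-transactions.csv")  # hypothetical filename

# Discretize: bin the numeric "total" column into categories for correspondence
# analysis (illustrative bin edges only; the real workflow's bins may differ)
df["total_binned"] = pd.cut(
    df["total"],
    bins=[0, 10, 21, 31, 200, float("inf")],
    labels=["<10", "11-21", "22-31", "32-200", ">200"],
)

# Continuize: one-hot encode the categorical columns and normalize everything,
# ready to be fed into PCA and then hierarchical clustering
onehot = pd.get_dummies(df[["account_type", "service_type"]])
onehot["total"] = df["total"]
X = StandardScaler().fit_transform(onehot)
```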

Correspondence Analysis

We first explore “correspondence analysis”, an unsupervised method akin to a “PCA for categorical data”. It is extremely useful for exploring a data set and quickly discovering relationships within it. For example, on our financial data set we get the following result:

[Figure: correspondence analysis plot]

  • The red dots are the “type of service paid for”
  • The blue dots are the “type of account”
  • The green dots are the amount of money, or “total”
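
For readers who prefer scripting to widgets, a rough equivalent of this projection can be built with the third-party prince library. The sketch below uses only two of the three variables (the Orange widget also projects the binned totals), and the file and column names are hypothetical:

```python
import pandas as pd
import prince  # third-party library implementing correspondence analysis

df = pd.read_csv("barnet-credit-card-transactions.csv")  # hypothetical filename

# CA works on a contingency table of two categorical variables
contingency = pd.crosstab(df["service_type"], df["account_type"])

ca = prince.CA(n_components=2).fit(contingency)
service_points = ca.row_coordinates(contingency)     # the red dots
account_points = ca.column_coordinates(contingency)  # the blue dots
```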

Overall, most observations are clustered in the bottom left of the graph; however, a few observations stand out.

In the top right corner of the graph, we see three points that are set apart:

[Figure: CA_1]

This suggests that “Fees and Charges” is closely related to the “Customer Support Group”, as is “Legal and Court Fees”. This is indeed the case, since:

  • all “fees and charges” payments were made by the “customer support group”

[Figure: CA_1a]

  • all “legal and court fees” were paid either by the “Customer Support Group” or by “Children’s Family Services”. It is interesting to note that “Legal and Court Fees” is pulled closer to the other points because it is connected to “Children’s Family Services”, which in turn is responsible for many different transactions within the data set.

[Figure: CA_1b]
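
Both claims are easy to verify directly with a contingency table; a pandas sketch, again with hypothetical file and column names:

```python
import pandas as pd

df = pd.read_csv("barnet-credit-card-transactions.csv")  # hypothetical filename

# For each service type, the share of its transactions made by each account type
by_service = pd.crosstab(df["service_type"], df["account_type"], normalize="index")

print(by_service.loc["Fees and Charges"])      # expect 100% under "Customer Support Group"
print(by_service.loc["Legal and Court Fees"])  # expect a split between "Customer Support
                                               # Group" and "Children's Family Services"
```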

Towards the bottom of the graph we see the following cluster:

[Figure: CA_2]

So we expect to find some sort of “special” relationship between these observations. In fact, upon verification we find that:

  • All “operating leases – transport” transactions were made by “Streetscene”:

[Figure: CA_2a]

  • 95% of all “Vehicle Running Costs” transactions were made by “Streetscene”
  • Over 53% of all “Streetscene” transactions were of type “operating leases – transport” or “vehicle running costs”, explaining why the blue “Streetscene” observation is “pulled” towards these two red dots.
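
The same crosstab confirms these figures, together with the reverse view per account (reusing df and by_service from the snippet above):

```python
# Reusing df and by_service from the snippet above
print(by_service.loc["Operating Leases - Transport"])  # expect 100% under "Streetscene"
print(by_service.loc["Vehicle Running Costs"])         # expect ~95% under "Streetscene"

# The reverse view: each account's transactions broken down by service type
by_account = pd.crosstab(df["account_type"], df["service_type"], normalize="index")
print(by_account.loc["Streetscene"])  # the two services above should sum to over 53%
```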

Towards the middle of the graph we see two particular observations grouped together:

[Figure: CA_3]

Upon investigation we find that the above clustering is justified since:

  • All “grant payments” were above 217.99:

[Figure: CA_3a]
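
Again this is quick to check, reusing df from the snippets above (label spelling hypothetical):

```python
# Reusing df from the snippets above
grants = df[df["service_type"] == "Grant Payments"]
print(grants["total"].min())  # expect a minimum above 217.99
```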

As can be seen, Correspondence Analysis allows us to quickly and easily draw educated conclusions about our data. In this particular case, it allows our hypothetical financial analyst to spot relationships between accounts, the types of services they pay for, and the amounts of money they spend. In turn, this allows the financial auditor to determine if any of these relationships are “fishy” and warrant further investigation.

Principal Component Analysis

Watch this space for a link to our next article exploring this data set using PCA!

Hierarchical Clustering

Watch this space for a link to our next article exploring this data set using Hierarchical Clustering!


Nugget Post: Reactive Functions to parse nested objects

Note: this article assumes familiarity with the Observer pattern / Reactive Programming, as described here: http://reactivex.io/

Some APIs return complex nested JSON objects. For example, take this cleaned-up sample response from ElasticSearch (which, incidentally, is used to build the “Data Table” visualization):
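
A trimmed-down sketch of such a response; the aggregation names (“dates”, “accounts”, “services”, “totals”) are hypothetical stand-ins for the real ones:

```python
# Hypothetical, trimmed-down ElasticSearch response; the aggregation names
# ("dates", "accounts", "services", "totals") are stand-ins for the real ones
response = {
    "aggregations": {
        "dates": {"buckets": [
            {"key": "2016-01",
             "accounts": {"buckets": [
                 {"key": "Streetscene",
                  "services": {"buckets": [
                      {"key": "Vehicle Running Costs",
                       "totals": {"buckets": [
                           {"key": 42.5},
                           {"key": 17.0},
                       ]}},
                  ]}},
             ]}},
        ]}
    }
}
```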

Note the structure of the object. Within the top-level “aggregations” object we see a recursive nested structure: each nested object has a “buckets” object, which contains an array of objects, and each of those objects also contains a “key”. The question now is: how do we efficiently traverse this object to extract each “key” value while retaining the parent object’s “key” as well? To further illustrate, take a subset of the example above:

[Figure: a subset of the nested structure, showing the desired (parent key, …, child key) tuples]

It was actually easier for me to reason about the above using imperative-style programming, which would look something like this:
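
A minimal sketch, again assuming the hypothetical field names from the sample response above:

```python
# Imperative traversal: nested loops, carrying each parent "key" down
results = []
for date in response["aggregations"]["dates"]["buckets"]:
    for account in date["accounts"]["buckets"]:
        for service in account["services"]["buckets"]:
            for total in service["totals"]["buckets"]:
                results.append(
                    (date["key"], account["key"], service["key"], total["key"])
                )
```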

However, the idea is to use ReactiveX programming to traverse the tree in order to make the code more concise. At each key, the program should “pass down” the key to its child observables, right down to the final child, which then emits the result to a subscriber. This is what we end up with (in RxPY):
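
A minimal reconstruction, assuming the RxPY 1.x API and the hypothetical aggregation names from the sample response above; the lines are laid out to match the numbering in the walkthrough below:

```python
from rx import Observable

def getAggregation(aggregation):
    return Observable.from_(aggregation["buckets"])

getAggregation(response["aggregations"]["dates"]) \
    .flat_map(lambda transaction: getAggregation(transaction["accounts"]).map(lambda a: (transaction["key"], a))) \
    .flat_map(lambda t: getAggregation(t[1]["services"]).map(lambda s: t[:1] + (t[1]["key"], s))) \
    .flat_map(lambda t: getAggregation(t[2]["totals"]).map(lambda b: t[:2] + (t[2]["key"], b))) \
    .map(lambda t: t[:3] + (t[3]["key"],)) \
    .subscribe(lambda result: print(result))
```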

 

Let’s step through the code:

  • Lines 3-4: Notice that each object (which I call an “aggregation” in the code) contains an object called “buckets”, which is an array. We can create observables from arrays, so this function simply grabs the “buckets” array of an arbitrary aggregation and returns it as an observable.
  • Lines 6-7: We call the getAggregation function for the first time, giving us an observable that emits the outer objects. We need access to the next inner object, which itself contains another “buckets” array that we can turn into a “child observable”. Therefore each emitted object (which I call a “transaction” in the code) is passed into the getAggregation function once again. However, we would like to pass on the parent’s “key” value with every emission from these child observables; that is the role of the map function, which prepends the key to the actual emission.
  • At this point we have an observable of observables, which we need to flatten in order to pass it to subsequent stages; that is the role of the flat_map stage.
  • Lines 8-9: These are repetitions of the same pattern described above; note how at each stage we add the key to the emission of the child observable and flatten the observables into a single stream for the next stage.
  • Line 10: We call our final map to transform the results into tuples, as shown in the diagram above.
  • Line 11: A generic subscriber function prints each result.

It’s a good exercise to:

  • really understand the difference between flat_map and map
  • understand how to pass variables to child observables via the “map nested in flat_map” pattern
  • ask yourself: did this really make the code more concise, and do you still find it easier to reason in imperative terms?
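
As a starting point for the first two exercises, here is a minimal contrast between map and flat_map, again assuming the RxPY 1.x API:

```python
from rx import Observable

# map: one emission in, one emission out; each number becomes an Observable,
# so the subscriber receives Observable objects rather than numbers
Observable.from_([1, 2]) \
    .map(lambda x: Observable.from_([x, x * 10])) \
    .subscribe(lambda obs: print(obs))

# flat_map: one emission in, many emissions out, flattened into a single
# stream; with these synchronous sources the subscriber prints 1, 10, 2, 20
Observable.from_([1, 2]) \
    .flat_map(lambda x: Observable.from_([x, x * 10])) \
    .subscribe(lambda n: print(n))
```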