azure, data science, machine learning in production

Building a Data Science Environment in Azure: Part 1

For the last few months, I have been looking into how to create a Data Science environment within Azure. There are multiple ways to approach this, and the right approach depends on the size and needs of your team. This is just one way in a space where there are many others (e.g. using Databricks).

Over the next few months, I will be running a few posts about how to get this kind of environment up and running.

First off, let’s mention some reasons why you might be looking to set up a Data Science sandbox in Azure rather than on-premises.

Reasons why:

  • On-prem machines are too slow.
  • Inappropriate (or no) tooling on-prem (and deployment of relevant tools to local machines is not fast enough).
  • Slow IT processes for requesting increased compute.
  • On-prem machines are under-utilised.
  • Different needs per user in the team: one person may be running heavy calculations, whilst someone else just runs small weekly reports.
  • Lack of collaboration within the company (perhaps cross-department or even cross-region).
  • No clear process for getting models into production.

Once you have clarified the why, you can start to shape the high-level requirements of your environment. Key requirements of our Data Science sandbox could be:

  • Flexibility – give IT control whilst still giving data scientists choice.
  • Freedom – enable data scientists by giving them the freedom to work with the tools they feel most confident with.
  • Collaboration – encourage collaboration, sharing of methods and the ability to re-use and improve models across the business.

You will want to think about who your users are, what tools they are currently using and also what they want to use going forward.

At this point, you might do a little scribble on a piece of paper to define what this might look like in principle. Here is my very simple overview of what we are going to be building over the next few posts. I’ve taken inspiration from a number of Microsoft’s own process diagrams.

Let’s take a look in more detail at the above process.

  1. We have our Data Science sandbox, which is where the model build takes place. The sandbox has access to production data, but users may also need to make API calls or access their own personal files located in blob storage, etc. This component is composed of a number of labs (via Dev Test Labs), which could be split by team, subject area, etc.
  2. Once we have a model we would like to move to production, the model is version controlled, containerised and deployed via Kubernetes (see the code sketch after this list). This falls under the Data Ops activity.
  3. The model is served in a production environment, where we capture the inputs and monitor the performance of our model. For now, I have this as ML Service, but you could also use MLflow or Kubeflow.
  4. This feeds back into the model, which can be retrained if necessary, and the process starts again.
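
To make steps 2 and 3 a little more concrete, here is a minimal sketch of registering and deploying a model using the azureml-core (v1) Python SDK. This is only an illustration, not the exact setup we will build in later posts; all of the names (model.pkl, score.py, environment.yml and the model/service names) are placeholders, and I have used Azure Container Instances here for brevity rather than a full Kubernetes (AKS) deployment.

# A rough, hypothetical sketch of steps 2 and 3 with the azureml-core (v1) SDK.
# All file and resource names below are placeholders.

from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

# connect to an existing Azure ML workspace (config.json downloaded from the portal)
ws = Workspace.from_config()

# register (version control) the trained model file
model = Model.register(workspace=ws,
                       model_path="model.pkl",
                       model_name="sandbox-model")

# describe how the model should be served: a scoring script plus a conda environment
env = Environment.from_conda_specification(name="serving-env",
                                           file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# deploy as a containerised web service (ACI here for brevity; AKS for production Kubernetes)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "sandbox-model-svc", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)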

The main technology components proposed are:

  • Dev Test Labs
  • Docker
  • Kubernetes
  • ML Service

In the next post, we will start setting up our Data Science environment. We will start by looking at setting up Dev Test Labs in Azure.

data science, python

Text mining: NLTK suite for Python

Today we are going to take a quick look at the NLTK suite for Python.

We can use NLTK in situations where we need to handle human language.

Things like:

  • Customer complaints classification
  • Sentiment analysis
  • Chatbot development
  • Insurance claim description analysis
  • Scanning candidate CVs

In this post, we will start with a large chunk of text (taken from the NLTK Wikipedia page) and then clean it, split it into substrings and then plot the frequency of each word.

First off, we need to import the relevant libraries and packages that we will be using:

# import relevant libraries and packages

import re
import nltk
from nltk import FreqDist
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer

# download the tokenizer models and stop word list (only needed once)
nltk.download('punkt')
nltk.download('stopwords')

Next we need to create a new object, which is our text from Wikipedia.

#Text below is taken from the NLTK page on Wikipedia. 

my_text = """The Natural Language Toolkit, or more commonly NLTK,
is a suite of libraries and programs for symbolic and statistical
natural language processing (NLP) for English written in the Python
programming language. It was developed by Steven Bird and Edward Loper
in the Department of Computer and Information Science at the University
of Pennsylvania. NLTK includes graphical demonstrations and sample data.
It is accompanied by a book that explains the underlying concepts behind
the language processing tasks supported by the toolkit, plus a cookbook."""

I have assigned it to the variable name my_text.

Next we want to replace any newline characters with a space, so the text won’t contain \n. For this we use the re module, which enables us to use regular expressions. I have also made everything lowercase, as otherwise ‘Language’ and ‘language’ would be counted as two different words.

#substitute \n newline within 'my_text' with a space and assign this to the 'document' object
doc = re.sub('\n', ' ', my_text)
document = doc.lower()

We can use the nltk tokenizer to divide the text up into individual words or even sentences. For example, if I wanted to divide it into sentences I could do:

sentences = nltk.sent_tokenize(document)
print(sentences)

This returns a list containing each sentence in the document as a separate string.

We are going to divide the string into individual words so that we can plot the frequency of each word. First we want to remove stop words, otherwise our most common word is going to be something like ‘of’, which isn’t so helpful.

NLTK already has a dictionary of stop words that we can use.

stop_words = stopwords.words('english')
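
If you are curious what is in this list, you can take a quick peek (the exact contents and count vary slightly between NLTK versions):

print(len(stop_words))   # the number of English stop words NLTK ships with
print(stop_words[:10])   # the first few entries, e.g. 'i', 'me', 'my', ...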

In this next step, we are going to keep only those words that are not in the stop words list. First, we need to use the tokenizer to divide our string into individual words. We are also going to remove any punctuation marks, otherwise these will also be classed as words.

tokenizer = RegexpTokenizer(r'\w+')
my_words = tokenizer.tokenize(document)

We are then going to create an empty list and call it my_words_ns. Then we loop through each word in my_words and, for each one that is not found in the stop words list, append it to my_words_ns.

my_words_ns = []

for word in my_words:
    if word not in stop_words:
        my_words_ns.append(word)
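
As an aside, the same filtering can also be written as a list comprehension, which is the more idiomatic Python style and produces the same result:

# equivalent one-liner: keep only the tokens that are not stop words
my_words_ns = [word for word in my_words if word not in stop_words]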

NLTK has its own frequency distribution class, FreqDist, which we can use to count and plot the frequency of each word. Let’s apply it to our list of words.

freqDist = FreqDist(my_words_ns)
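
FreqDist behaves much like Python’s Counter, so before plotting you can inspect the most frequent words directly with most_common:

# show the ten most frequent words and their counts as (word, count) tuples
print(freqDist.most_common(10))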

You can get the frequency of a specific word like this:

print(freqDist["language"])

Now let’s plot the top ten words:

my_plot = freqDist.plot(10)
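
One thing to note: FreqDist.plot draws with matplotlib, so matplotlib needs to be installed. If the default figure is too cramped for the word labels, you can set up your own figure first; a small sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

plt.figure(figsize=(10, 5))            # a wider canvas so the word labels fit
freqDist.plot(10, cumulative=False)    # plot the ten most common words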

And there you have it. This can then be built out in order to inform certain business processes. Next time, we will use things like gensim and Microsoft Cognitive Services to explore what we can achieve by harnessing the power of machine learning.