Agile & DevOps Practices to Accelerate AI & Machine Learning Outcomes
As enterprises adopt Agile methodologies and embed these practices across the organization to achieve scale, most firms will encounter headwinds in the newest areas of technology development and innovation. In this blog, I’d like to talk about why AI- and ML-centered development projects will inevitably stress your current Agile practices and stretch existing DevOps tooling capabilities, but also why they don’t need to ‘break the bank’ when it comes to progressing your Scaled Agile initiatives. By leveraging the SDLC capabilities already in place at many enterprises, Agile and DevOps practices to accelerate AI can be successfully adopted and matured through existing toolchains to support rapid delivery at acceptable quality.
If you often find yourself reading articles about Agile and the challenges of integrating a particular methodology into an existing organization, then you are no stranger to transformation anxiety. The good news about this kind of anxiety is that it’s illogical…now comes the bad news: it’s a behavioral problem, requiring you to rewire and reshape how the organization has come to operate over years or decades. While there’s no magic wand for the latter, there is one for the former: build a logical view of how delivery occurs so that Agile methodology and DevOps tooling can be applied. In this blog we’ll look specifically at AI and ML delivery, demystify an area of rapid innovation and significant hype and marketing, and start seeing your team’s inherent SDLC pattern for what it really is.
What’s AI and Machine Learning and why do I care?
Simply put, Artificial Intelligence (AI) is any device/application that can perceive its environment and take actions or make decisions that aim to maximize its goals. So what’s a familiar example you can relate to today? How about your GPS? It may have been developed not only to bring you to your desired destination, but also to use AI learning techniques, analyzing your historical path choices and other real-time traffic data to make ‘best fit’ choices for your travel path. Even smarter techniques can be iteratively applied to take other variables into consideration, such as historical traffic, construction, accidents, and fuel economy based on your vehicle.
So now what’s Machine Learning? Considered a subset of artificial intelligence, ML is a study of algorithms and statistical models that computer systems use to perform a task without using explicit instructions. For supervised machine learning approaches, they do this by building a mathematical model based on sample or training data in order to make predictions or decisions. Unsupervised learning gets more complex into neural networks, coincidence, and covariance…but I think you get the point!
So, what’s a recognizable example of Machine Learning? Facial recognition is a commonly used example that exhibits applied Machine Learning towards a goal of identifying an individual, a set of individuals, a group of particular attributes, or whatever your objective may be. Instead of creating massive case statements around recognizable attributes, the data scientist trains the system using statistically large sets of input/output data.
Machine Learning is a fascinating topic with many different approaches…though for some it’s just too much math. But now that we have the basic definitions done, it’s time to forget the complexity of ‘how’ this works, and focus on what Agile software development looks like in a team using AI/ML.
Elements of a Data Driven Application
1. Data Management
Given the basic definitions of AI/ML, we know that data management is a core and essential component of any AI/ML project or program of work. If your team isn’t obsessing about data management with the same rigor as application code management, then you may want to take a step back and understand how your datasets will be used over time across your project and in future feature re-use.
The aspects of data management that inherently make AI/ML projects more complex, although not unique, typically are:
- Multi-source Datasets are often used. It’s very typical to see advanced AI/ML projects incorporate multi-source data, requiring data governance practices to be established (at least) around the data sources contributing to the work.
- Versioned datasets are necessary. Expect that you will need to version your datasets so you can tie them to specific training, testing, and exploratory analysis.
- Database Change Management practices are often employed. Given that model outcomes will be highly sensitive to data inputs (and declared outputs if using supervised techniques), it’s imperative to establish good practices around Database Change Management, Data Snapshots, Data Cloning, and Data Hydration.
- Lastly, because AI/ML can often be non-deterministic, summary data needs to be captured for all model testing & training that relates versioned data sets utilized, versioned algorithms/models, data prep attributes, and outcome data.
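The dataset-versioning and run-summary practices above can be sketched with a simple content-addressed scheme. This is a hypothetical illustration (the function and field names are invented, and it assumes datasets small enough to serialize in memory), not a prescribed tool:

```python
import hashlib
import json

def dataset_version(rows):
    """Derive a content-addressed version id for a dataset snapshot."""
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

def record_run(dataset_rows, model_id, prep_steps, outcome):
    """Summary record tying versioned data, model, and prep to an outcome."""
    return {
        "dataset_version": dataset_version(dataset_rows),
        "model_version": model_id,
        "data_prep": prep_steps,
        "outcome": outcome,
    }

rows = [{"x": 1, "y": 2}, {"x": 3, "y": 4}]
run = record_run(rows, "model-v1.3", ["dedupe", "normalize"], {"accuracy": 0.91})

# The same rows always hash to the same version id, so any training or
# testing run can be traced back to the exact data it used.
assert run["dataset_version"] == dataset_version(rows)
```

Commercial data-versioning tools do far more (snapshots, cloning, lineage), but the core idea of binding a training outcome to an immutable dataset version is this simple.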
While all of the above-mentioned considerations may not be necessary for your AI/ML project, it’s important for the team to decide which practices are crucial to producing iterative releases while maintaining quality and team efficiency. It’s also worth understanding that as your enterprise builds more features, there will be entanglement and dependencies across AI/ML projects over their lifecycle.
If some of these data management practices seem more complex than those of a typical application, recognize that they follow IT patterns that already exist in Data Warehouse Management, scenario-based trading algorithms, historical backtesting, and portfolio risk calculations. Chances are your enterprise already has some tools and/or processes in place to help, or at minimum the expertise to approach these activities and apply version management to datasets that will be used and re-used across your organization. Mature Data Governance practices in your enterprise will also contribute to overall improvements across data management practices and tooling. Agile and DevOps practices to accelerate AI projects should be part of discussions in your Centers of Excellence (CoEs), ensuring that the specific capabilities that need to mature have enterprise-level visibility.
2. Data Preparation
If you are familiar with the term ‘garbage in / garbage out’ then you are halfway towards being a Data Scientist. All joking aside, given we are talking about AI/ML, we can assume a data-centric mindset and principles within a given development team. This will include dealing with significant amounts of data that may be structured, semi-structured, and/or unstructured. Teams involved in AI/ML work are often continuously planning or executing Data Prep activities as part of the SDLC, feeding model training runs, model testing runs, and/or deployed applications that need to be packaged with initial-state datasets.
So what types of activities will you see as Data Preparation that need to be managed with an Agile Mindset?
- Data Mapping may be continuously used, referenced and/or studied to create relationships across data sets.
- Feature Engineering may be utilized, including the addition of metadata, properties, coefficients, labels and/or derived datasets to complement existing data sets
- Data Normalization may be employed to facilitate better system use of the data sets
- Data Quality techniques may be used in the form of de-duping, data cleansing, and metadata tagging (as it pertains to quality)
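A toy sketch of how cleansing, de-duplication, normalization, and feature engineering might look on raw records; the field names, thresholds, and rules here are purely illustrative:

```python
# Raw records with inconsistent formatting and a hidden duplicate.
raw = [
    {"city": "  New York", "income": 50000},
    {"city": "new york", "income": 50000},   # duplicate after cleansing
    {"city": "Boston", "income": 80000},
]

def cleanse(rec):
    # Data quality: trim whitespace and canonicalize case.
    return {"city": rec["city"].strip().lower(), "income": rec["income"]}

cleaned, seen = [], set()
for rec in map(cleanse, raw):
    key = (rec["city"], rec["income"])
    if key not in seen:          # de-dupe on the cleansed record
        seen.add(key)
        cleaned.append(rec)

# Data normalization: rescale income to the [0, 1] range.
lo = min(r["income"] for r in cleaned)
hi = max(r["income"] for r in cleaned)
for r in cleaned:
    r["income_scaled"] = (r["income"] - lo) / (hi - lo)
    # Feature engineering: add a derived label usable by a model.
    r["high_income"] = r["income"] > 60000
```

The point for the Agile practitioner is that each of these rules is an artifact: a change to any of them changes what the model sees, and so belongs under the same change management as code.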
If you are familiar with data management, then all of these practices will be recognizable as disciplines of data management. When it comes to AI/ML development lifecycles, generally more stringent practices need to be evolved to ensure data prep activities are aligned with testing, training and ultimately releases.
3. The Algorithm and Model
One might ask “Isn’t the computer supposed to build the model?” The practical answer is that a significant amount of scientific analysis is done by humans, including hypothesis generation and mutation, model optimization and ‘best-fit’ analysis, hyperparameter/algorithm tuning, model validation, and other types of continuous study. The data scientist puts a non-trivial amount of effort into data modelling throughout the SDLC, and this impacts the programming of the algorithm, how data is managed, and how data is prepared.
There are many in-depth resources one could research in this area with respect to algorithm and model approaches. From the Agile practitioner’s standpoint, however, ‘how’ it works under the covers is not important, only which essential activities are done as part of the SDLC when it comes to training, testing, and releasing.
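Much of the tuning effort described above boils down to systematically searching a hyperparameter space against validation data. A minimal grid-search sketch, with a toy thresholding model and made-up hyperparameters standing in for real ones:

```python
from itertools import product

# Toy validation data: (feature, label) pairs.
val = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]

def accuracy(threshold, bias):
    # A trivially simple "model": predict 1 when the shifted feature
    # exceeds the threshold, then score against the known labels.
    preds = [1 if x + bias > threshold else 0 for x, _ in val]
    return sum(p == y for p, (_, y) in zip(preds, val)) / len(val)

# Exhaustive grid search over two hyperparameters, keeping the best combo.
grid = {"threshold": [0.2, 0.5, 0.8], "bias": [-0.1, 0.0, 0.1]}
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: accuracy(**params),
)
```

Real projects use far richer search strategies and models, but the SDLC implication is the same: each tuning run is a reproducible experiment whose inputs and scores should be recorded.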
4. The Business Application or Presentation Layer
Finally, we get to the goal or outcome of the AI/ML project. Unless this team is in an academic setting, you will find that there is value being delivered – whether in the form of an application, a functional device, or a deliverable set of insightful data. In the standard SDLC world we call this a release, and in AI/ML projects it is no different.
The SDLC Lifecycle of AI/ML projects
Now that we’ve broken down the process of an AI/ML project it’s a good time to map them into a typical SDLC process. If you are an Agile SDLC expert, you may have already come to the realization that there are no mysterious gotchas or roadblocks to application of Agile and DevOps practices for AI projects, but let’s continue through the mapping to a typical SDLC flow.
I like Figure 2 because it is simple and shows, from the point of view of a data scientist, their process at a high level. As you can see, it maps neatly to the SDLC paradigm of Design -> Develop -> Deploy -> Monitor, with the well-known Deming Cycle for continuous improvement based on data and business knowledge acquired throughout iterative work. It’s important to note that there are feedback cycles within most of these activities, stressing the continuous nature of improvement, rapid prototyping, and the hypothesis generation/testing associated with Agile ways of working.
Managing Releases and SDLC
Much like any delivery, we expect that an SDLC release in the AI/ML context goes to a consumer audience and is of an expected quality that is in line with stakeholder expectations. Cumulatively, it includes each of the above-mentioned practices. In effect, a release can constitute any of the following and therefore still be done in an agile, iterative manner:
1. A fundamental change in the Data Management may be what constitutes the release. This could include addition or removal of source data, changing to a different versioned set of data, or changing other data elements used in model training.
2. A change in Data Preparation could also be a trigger for release. This could include changes in data mapping, feature engineering, cleansing, or normalization rules. Given that even small changes can affect the model training, a team should be prepared to consider changes in this area significant enough to call a release if it improves the overall outcome.
3. A change in the Algorithm, Model, or Features may also trigger a release candidate. This would include any source code, compiler, hyperparameter, or configuration changes.
4. A change in the Business Application or Presentation layer may also trigger a release candidate where the business application source code, business rules, business layer configurations, etc. may be the element of change in an SDLC release.
5. Or lastly, any combination of the above.
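One way to operationalize these triggers is to treat each element of the delivery as a versioned component and diff manifests between releases. This is a hypothetical sketch with invented component names, not a prescribed tooling choice:

```python
# Each release is described by a manifest of versioned components:
# the dataset, the data-prep rules, the model, and the business app.
last_release = {
    "dataset": "ds-v12",
    "prep_rules": "prep-v3",
    "model": "model-v1.4",
    "app": "app-v2.0",
}

# A candidate build with a new model and a new dataset version.
candidate = dict(last_release, model="model-v1.5", dataset="ds-v13")

# Diffing the manifests surfaces exactly which triggers fired.
changed = {k for k in candidate if candidate[k] != last_release[k]}
release_needed = bool(changed)
```

Any non-empty diff, whatever its combination of components, is a legitimate release candidate under the criteria above.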
This is not to say that SDLC management of AI projects is simple; it requires more advanced data management tooling than is commonplace in most enterprises. However, there are solutions in the marketplace that will facilitate how you manage your test data, version your datasets, perform database source change management, clone and hydrate data sources for test-environment preparation, orchestrate your model training, manage and review historical test results, package and deploy your AI/ML binaries, and manage and version features used by other model processes.
How can Citihub Digital Help?
Citihub Digital has experience implementing effective and practical Agile and DevOps practices to accelerate delivery across the organization, spanning multiple types of applications and distributed platforms.
If you already have a significant portfolio of AI/ML projects in flight or applications in production, establishing new processes and Agile methodology can be challenging, but worth the rewards in quality, efficiency, and time to market. Our maturity model assessment of your current Agile and DevOps practices and capabilities will establish and measure coverage of the key elements above and help build a backlog of continuous improvement aimed at improving end-to-end outcomes.
If you have not yet built a critical mass in AI/ML, but are working through your governance or CTO structure to establish these capabilities, we can lead your organization through the challenges of implementing the right balance of SDLC control and Agile practices, helping structure your engagements and navigate your technology and operational risk requirements in line with building a competency in advanced data analytics and AI.
We have experience with some of the leading data governance tools, data management tools, Scaled Agile practices, and popular DevOps tools that will be necessary to establish a competency around Agile delivery of AI/ML-based applications.
And most importantly, we believe that the ultimate goal of Agile in the workplace is to enable teams to deliver value iteratively, understanding that the end-to-end process has to be practical and realistic to deliver value in an organization.
Chris Zanelli talks about approaches to structuring enterprise Agile and DevOps around your Artificial Intelligence and Machine Learning projects to boost outcomes and time-to-market.