Establishing Development Standards

Objective: In this learning experience, we’ll talk about how to establish consistent development standards for all teams involved in RPA.

Developing and defining best practices for developers is one of the core responsibilities of the Center of Excellence. These best practices should reduce bot development times, standardize logging and reporting for RPA key performance indicators, and pave the way for scaled development. This is a huge responsibility for a CoE, especially when just getting started – make sure you don’t skip this one!

What exactly does “Establish Development Standards” mean? Let’s dig into how to operationalize these standards within your organization. We’ll look at a couple of key areas for the CoE to focus on to establish a consistent, auditable, and supportable bot environment. Start doing this while your CoE is still centralized, so that when you expand to a federated model, it’s easy to roll out the same practices to all teams.

Bot Shells

What is a Bot Shell?

Most standardization will be driven by your bot shell. You may hear this called any number of things, like bot shell, bot framework, or bot template. Every time a developer builds a new bot, they start with a copy of the bot shell, which already includes your standard analytics, reporting, and other common components. In this way, common tasks like logging, log management, error handling, and reporting can all be done in a standardized way. As new developers and federated factories are onboarded, the bot shell makes it easier for them to get started. Let’s look at characteristics of an effective bot shell and how this can be implemented in your CoE.

Error Handling

Error handling is an essential element of an effective bot shell. Why? Think about what could happen if bots are developed without error handling. A bot runs on a headless bot runner hosted on a server that no one is actively watching. An error occurs on the bot run and a pop-up comes up indicating the details of the error. The bot runner will wait indefinitely for someone to address the pop-up. Worse yet, it will not log the cause of the error, so when the machine is rebooted, error details are lost. While seasoned developers are familiar with how crucial error handling is, new developers and citizen developers may not be. Supplying new developers with a template that already includes basic error handling will help prevent them from causing issues to other scheduled bot runs and ensure that bot errors can be diagnosed and fixed.
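
In an actual A2019 bot the trapping is done with the product’s error-handling actions; the Python sketch below is only an illustration of the logic the bot shell should bake in (the function names and log format are assumptions, not part of the product):

```python
import logging
import sys
import traceback
from datetime import datetime

def run_with_error_handling(bot_logic, log_file):
    """Trap any failure in the bot's logic, log the details, and exit cleanly
    instead of leaving a pop-up waiting on an unattended bot runner."""
    logging.basicConfig(filename=log_file, level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    try:
        bot_logic()
    except Exception:
        # Record the error message, the offending line, and when it happened,
        # so the cause survives a reboot of the machine.
        logging.error("Bot failed at %s\n%s", datetime.now(), traceback.format_exc())
        sys.exit(1)  # fail fast so other scheduled runs on this runner aren't blocked
```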

Logging

It’s not exciting or glamorous, but one of the best things you can do for the supportability of your RPA practice is to standardize the way that logging and log management are done. Across ALL bots, logs should be saved to a consistent location and should follow a standardized pattern for readability. Log files should be automatically purged at regular intervals so their storage locations don’t fill up over time. Logging goes hand in hand with error handling. When an error occurs, we want to do a few things every time:

  • Trap it: We don’t want the error causing issues on the bot runner that could impact the schedules for other bot runs
  • Log it: We want to log all the details about the error that we can:
    • What line did the error occur on?
    • What was the error message that occurred?
    • What did the screen look like at the time of the error? (Using the Screen package’s Capture Desktop action)
      • These screen captures can be very helpful in understanding why the bot failed, but be sure they are securely stored or not used on bots where PII data may appear on screen
    • On what date, time, and bot runner did the error occur?
  • Save and close the log: Where to save logs is key. Your bot shell should define a consistent pattern for logging locations for each bot. A common approach is to save the logs in the program data directory (similar to where Enterprise A2019 product logs live) in subfolders named after the bot name and factory (or department) name, as in the example below and the sketch that follows this list.
    • Example:
      • Log Files: C:\ProgramData\AutomationAnywhere\Bots\Logs\InvoiceProcessingBot-FinanceFactory\Logs
      • Snapshot Files: C:\ProgramData\AutomationAnywhere\Bots\Logs\InvoiceProcessingBot-FinanceFactory\Snapshots
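
As a rough illustration only (the bot shell itself would do this with built-in actions, and the folder names simply mirror the example above), deriving those standardized locations from the bot and factory names amounts to something like this:

```python
from pathlib import Path

# Base folder mirroring the example layout above; adjust to your organization's standard.
LOG_ROOT = Path(r"C:\ProgramData\AutomationAnywhere\Bots\Logs")

def standard_log_paths(bot_name: str, factory: str) -> dict:
    """Return (and create if needed) the standardized Logs and Snapshots folders
    for a given bot and factory, e.g. InvoiceProcessingBot-FinanceFactory."""
    bot_root = LOG_ROOT / f"{bot_name}-{factory}"
    paths = {"logs": bot_root / "Logs", "snapshots": bot_root / "Snapshots"}
    for folder in paths.values():
        folder.mkdir(parents=True, exist_ok=True)
    return paths
```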

In addition to logging errors, it’s also important to set a standard for how audit logging can be used. Many bot developers use audit logging to track the progress of their bot runs. For example, in human language, “Log the fact that I made it into this loop and how many items were found.” Audit logging is tougher to enforce through a bot shell since it depends on the specific bot, but be sure that your bot shell at least establishes a pattern for creating and writing to an audit log file for each bot execution.

In practice, this often means having the bot shell create a new log for this bot for each date. Any bot runs that occur on that date would continue to append to that same file. Each bot stores its log files in a bot-specific folder to ensure there is never any confusion or overwriting of one bot’s logs with another. This formatting becomes especially important as we consider log management.
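
A minimal sketch of that daily-file pattern, again in Python purely for illustration (the file-naming convention is an assumption your CoE would standardize):

```python
from datetime import datetime
from pathlib import Path

def append_audit_entry(log_dir: Path, bot_name: str, message: str) -> None:
    """Write a timestamped entry to today's audit log for this bot.
    One file per bot per date; additional runs on the same date append."""
    log_file = log_dir / f"{bot_name}_{datetime.now():%Y-%m-%d}.log"
    with log_file.open("a", encoding="utf-8") as audit_log:
        audit_log.write(f"{datetime.now():%Y-%m-%d %H:%M:%S} | {message}\n")
```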

Log Management

Many people think to include logging within their bot shells, but very few think to also include automatic log management. What is automatic log management? It means the bot shell automatically cleans up older log files so the drive storing the logging details doesn’t fill up indefinitely with bot logs. Exactly how you do this depends on the logging standards in your organization. Consider including log management in your bot shell that looks for any log files older than X days (typically 30, 90, or 180 days) and deletes them. This log management step can be executed on every bot run before the bot starts executing its own logic. At the beginning of the bot run, also check that the log files and folders the bot expects to write to have been created.
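
For illustration, the cleanup logic the bot shell runs at startup boils down to something like the sketch below (the retention window and file pattern are assumptions for your CoE to set):

```python
import time
from pathlib import Path

def purge_old_logs(log_dir: Path, retention_days: int = 90) -> None:
    """Delete any log files older than the retention window so the logging
    drive never fills up; run this before the bot's own logic starts."""
    cutoff = time.time() - retention_days * 24 * 60 * 60
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
```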

Reporting

Reporting is covered in depth in the Defining Success learning experience. For now, suffice it to say that the considerations for reporting are similar to those for logging.

Your bot shell should automatically log the results of the bot’s execution to your centralized reporting repository (Bot Insight or otherwise) with standardized details about the bot’s execution. This might include things like the start and end times, the bot name, the factory or department the bot belongs to, and the bot runner machine.
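
Whatever the destination (Bot Insight or another repository), the standardized payload the bot shell assembles looks roughly like this sketch; the field names are illustrative assumptions, not a Bot Insight schema:

```python
import json
import socket
from datetime import datetime

def execution_record(bot_name: str, factory: str, start: datetime,
                     end: datetime, status: str) -> str:
    """Assemble the standardized set of fields every bot reports on completion."""
    return json.dumps({
        "bot_name": bot_name,
        "factory": factory,
        "bot_runner": socket.gethostname(),
        "start_time": start.isoformat(),
        "end_time": end.isoformat(),
        "status": status,
    })
```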

Miscellaneous Features

Other functionality may also be helpful to include in your bot shell. This might include things like opening up a support ticket if the bot fails, establishing a pattern for the use of Bot Insight, sending an email to the factory distribution list on bot completion, sending an SMS to the support team, or ensuring that network drives have been appropriately mapped on the workstation. There’s really no end to what you could include in this shell.
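
As one concrete example, a completion notice to the factory distribution list amounts to logic like the following sketch; the SMTP host and addresses are placeholders, and an A2019 bot would typically use the product’s built-in email actions instead:

```python
import smtplib
from email.message import EmailMessage

def notify_factory(bot_name: str, status: str, distribution_list: str,
                   smtp_host: str = "smtp.example.com") -> None:
    """Email a short completion notice to the factory distribution list."""
    msg = EmailMessage()
    msg["Subject"] = f"{bot_name} finished with status: {status}"
    msg["From"] = "rpa-coe@example.com"   # placeholder sender address
    msg["To"] = distribution_list
    msg.set_content(f"The bot {bot_name} completed with status {status}.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```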

One word of caution, though: don’t make the bot shell so burdensome that one needs a degree in computer science to figure out how to use it. The idea behind a bot shell is that the burden to use it is light and the value it provides is clearly explained. The biggest benefit of a bot shell is that it accelerates development: developers can get started right away on the logic specific to their automation opportunity instead of re-implementing commonly performed functions like logging and error handling. Be sure that any additional features added to the bot shell don’t take away from its ease of adoption.

Documentation

Documentation is key to a successful federated RPA practice, and as a CoE you must enforce consistent documentation. Without consistent documentation, it will be impossible for your enterprise cybersecurity, audit, support, or other bot development teams to know what a bot is doing, what applications it interacts with, and what reusable components it might leverage.

Think of documentation as a living, breathing record. Update the documentation when you update the bot. In this way, you won’t run into issues with people referencing an old version of the documentation and getting outdated information. You also reduce the amount of documentation developers need to produce for each release. As a starting point, we recommend creating a documentation template (or using the Private Bot Store) and including the following for all bots:

Bot Purpose

Documentation should start with the business problem the bot is designed to solve. In this way, readers can understand why the bot exists and understand the business value that it provides. Other factories and bot builders can also reference this document to understand the original use case and the bot’s solution.

Flow Chart

This doesn’t have to be as detailed as the flow view of the bot itself in Enterprise A2019, but should generally show what the bot does, and most importantly how it interfaces with other applications and services. If the bot gets its list of tasks from a workflow application before it updates another internal web app, for example, those are the kinds of connections that should be clearly documented.

Accounts Used

Should the original creator of this bot not be available, what information would another developer need to be able to troubleshoot and update the bot? A key component to that is knowing what accounts (service and application) the bot is using, what credential vaults or lockers those values come from, and what group to get in contact with to resolve issues related to bot authentication in those various apps.

Where Documentation Is Stored

The CoE needs to set guidelines for where documentation is stored. Options include:

  • Private Bot Store: This is the most RPA-specific solution, with custom documentation fields for automations. You can create custom filters to organize bots by department, factory, and other characteristics.
  • A centralized SharePoint with sub-repositories for each factory.
  • A Confluence or other wiki-style page that can be regularly updated with revisions tracked.

Enforcement

To ensure that documentation is consistent and effective, the CoE needs to:

  • Establish a centralized location for documentation
  • Provide a documentation template or example for new factories
  • Inform all bot developers that documentation is required and will be validated before a bot is released into production

Release Management

The standards mentioned so far assist developers and the CoE before code moves into production. The process of releasing bots into production is also critically important. Make sure to iron out this process well before any factories are ready to push their code. Let’s look at a couple of questions that a CoE should ask when preparing for this release management process.

How often are we allowing releases?

This depends on the release management policies of your IT team. Some organizations are extremely strict, while others don’t have a clearly defined release calendar. The CoE should work with the factories to establish a release cadence that allows factories to meet their automation goals while preventing daily pushes to fix production issues due to inadequate testing or poorly gathered requirements.

As an example, you may say that the code release window is every Thursday (weekly releases) with a code freeze at close of business the preceding Monday. In this way, factories are required to complete IT and QA testing of their automation before close of business Monday and are unable to make any modifications to the code after that date. This strikes a balance between the CoE’s flexibility to assist with releases and the due diligence required of a factory to properly test, validate, and document its code.

What documents are required for release?

This could be a combination of what IT already requires and what the release management team determines is appropriate for a bot release. Identify the set of documents needed to clearly show that the bot has been documented, tested, code reviewed, and signed off by the manager or lead of the factory. These documents can be formal or informal. The point is that the CoE must take the time to think through what should be required and share details and sample documents with factories.

What happens if a bot isn’t working in production?

Regardless of the amount of testing performed, inevitably situations arise where production systems don’t match systems in test environments, application configurations are wrong, or the bot flat out isn’t working like it should. The CoE should be prepared to receive requests for emergency fixes into production. Weigh the pros and cons of allowing emergency fixes and define what documentation is required to support them.

It’s also important to understand and address the underlying issues that led to the emergency. Not enough testing? App inconsistencies between environments? Not enough error handling? Help the factory identify the issue and ensure they are working towards mitigating future issues.

Reusability

Reusability isn’t chronologically the last thing to consider, but it’s a fitting topic to discuss once code is in production. Developing for reusability is all about how processes can be broken down into components that can be reused across multiple automation opportunities. Creating a bot to automate the assignment of cases in Salesforce? What components of that bot (or package) might be reusable across other Salesforce-related automation opportunities? There are likely many, but for the purposes of example, let’s focus on an easy one: authentication. Authenticating for the Salesforce case assignment bot likely follows an identical pattern to authenticating for every other Salesforce automation opportunity (albeit with different credentials). Instead of building out the steps to automate that login over and over again for every Salesforce bot, consider creating a reusable bot or package that other developers, including yourself, can use for future Salesforce automation opportunities. In this way, the effort to automate a process can be focused on what is unique to that process as opposed to the common tasks that all processes share.
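
To make the idea concrete, here is a hedged sketch of the structure of such a shared component, assuming a Selenium-style web driver; the URL, element locators, and function names are illustrative, and in practice this would be a shared bot or package rather than Python:

```python
def salesforce_login(driver, username: str, password: str) -> None:
    """The build-once login routine that every Salesforce bot calls.
    Credentials are supplied by the calling bot (e.g. from the Credential Vault)."""
    driver.get("https://login.salesforce.com")                  # illustrative URL
    driver.find_element("id", "username").send_keys(username)   # illustrative locators
    driver.find_element("id", "password").send_keys(password)
    driver.find_element("id", "Login").click()

def assign_cases(driver) -> None:
    """Only the logic unique to this automation lives in the bot itself."""
    ...  # case-assignment steps specific to this opportunity
```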

How can I communicate that a reusable component exists and is available for others to use?

This can be a real challenge, especially in a fully federated development environment where different federated teams may not regularly interact due to geographical or organizational boundaries. Fortunately, Automation Anywhere has a solution: the Private Bot Store. Private Bot Store enables the documentation and sharing of reusable components by providing an interface where developers can list them. Once shared, other developers can search through Private Bot Store bot and package listings to see what internally created reusable components exist, details about how those components work, and contact details for the original creator should they have any questions.

How does this work in practice?

It’s really up to the CoE to decide, but one approach would be making shared components available in a central sub-directory of the Bots directory in the Control Room. Based on the size of the RPA practice, a CoE lead can determine the specifics of providing access to that directory, but most everyone would need read access while only one or two developers per federated group may need write access (again, as a CoE lead, use your best judgement on providing broad access without introducing the risk of inexperienced developers breaking the shared content). When a bot builder finds content on Private Bot Store that they are interested in using within their build, they can review the details of that bot or package and follow the on-screen instructions on how to find those resources within their own Control Room. As developers consume reusable content, development times accelerate because time isn’t spent re-creating internal solutions that already exist. As developers take on new automations that leverage applications without existing reusable resources, they should focus on developing their solution in a modular way so that as much potentially reusable content as possible can be shared.

Summary

Clearly defined development standards enable cleaner production releases, easier supportability of bots, and an auditable release process that tracks the evidence of testing and documentation. Without these things clearly defined, factories will submit code of widely varying quality and push to production with varying degrees of success.

It’s important that the CoE not only establish good practices for development and deployment, but also model these practices for factories to mimic. With a well-defined bot shell, documentation templates, and release management standards in place, factories and the CoE can focus on delivery as opposed to reinventing the wheel every time a new bot is preparing to go live.
