Learning Journey

Establishing Development Standards

  • 25 March 2021

Objective

 

In this learning experience, we’ll talk about how to establish consistent development standards for all teams involved in RPA. Developing and defining best practices for developers is one of the core responsibilities of the Center of Excellence (CoE). These best practices should reduce bot development times, standardize logging and reporting for RPA key performance indicators, and pave the way for scaled development. This is an immense responsibility for a CoE, especially when just getting started, but make sure you don’t skip this one! We’ll dig into what “Establish Development Standards” actually means, what a CoE should focus on when establishing a consistent, auditable, and supportable bot environment, and how to operationalize these standards within your organization. Start doing this while your CoE is centralized, and you’ll find it much easier to roll out the same practices to all teams when you expand to a federated model.

 

Bot Shells

 

What is a Bot Shell?

 

Most standardization will be driven by your bot shell. You may hear this called many things, including bot shell, bot framework, or bot template. Every time a developer builds a new bot, they start with a copy of the bot shell, which already includes your standard analytics, reporting, and other components. This ensures that common tasks like logging, log management, error handling, and reporting are handled the same way in every bot. The bot shell also makes it easier for newly onboarded developers and federated factories to get started. Let’s look at the characteristics of an effective bot shell and how it can be implemented in your CoE.

 

Error Handling
 

Error handling is an essential element of an effective bot shell. Bots typically run unattended on headless bot runners hosted on servers that no one is actively watching. Without error handling, an error during a bot run produces a pop-up with the error details, and the bot waits indefinitely for someone to address that pop-up. Worse yet, the cause of the error is never logged, so when you reboot the machine, you lose the error details. While seasoned developers know how crucial error handling is, new developers and citizen developers may not. Supplying new developers with a template that already includes basic error handling helps prevent them from disrupting other scheduled bot runs and ensures that bot errors can be diagnosed and fixed.
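
In Automation 360 itself, this trapping is typically built with the error handler (Try/Catch) actions inside the bot shell. Purely to illustrate the pattern in a language-neutral way, here is a minimal Python sketch; the function names and the idea of passing in a screenshot routine are assumptions for the example, not product APIs.

  import logging
  import traceback

  def run_with_error_handling(bot_logic, error_log_path, capture_screen=None):
      # Outer wrapper supplied by the bot shell: trap the error, log the details,
      # and never leave a pop-up waiting on an unattended bot runner.
      logging.basicConfig(filename=error_log_path,
                          format="%(asctime)s %(levelname)s %(message)s",
                          level=logging.INFO)
      try:
          bot_logic()                              # bot-specific steps go here
      except Exception as exc:
          logging.error("Bot failed: %s", exc)     # the error message
          logging.error(traceback.format_exc())    # includes the failing line
          if capture_screen:
              capture_screen()                     # screen state at the time of failure
          raise                                    # fail fast so schedules and reporting see it

The asctime field captures the date and time of the failure; the log’s location (covered next) ties it back to the specific bot and bot runner.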

 

Logging

 

It’s not exciting or glamorous, but one of the best things you can do for the supportability of your RPA practice is to standardize how logging and log management are done. Across ALL bots, teams should save logs to a consistent location, follow a standardized logging pattern for readability, and purge log files at regular intervals. Logging goes hand in hand with error handling. When an error occurs, we want to do a few things every time:

 

  • Trap it. We don’t want the error causing issues on the bot runner that could impact the schedules for other bot runs.

 

  • Log it. We want to log all the details about the error that we can, including:

    • What line did the error occur on?

    • What was the error message that occurred?

    • What did the screen look like at the time of the error? (Using the Screen package’s Capture Desktop action)

      • These screen captures can be very helpful in understanding why the bot failed, but be sure to store them securely, or opt not to capture the screen in bots where PII data may appear.

    • On what date, time, and bot runner did the error occur?

 

  • Save and close the log. Understanding where to save logs is key. Your bot shell should define a consistent pattern for the logging locations of each bot (a path-building sketch follows this list). A common approach is to save the logs in the program data directory (similar to where Automation 360 product logs live), in subfolders named after the bot name and factory (or department) name.

    • Example:

      • Log Files: C:\ProgramData\AutomationAnywhere\Bots\Logs\InvoiceProcessingBot-FinanceFactory\Logs

      • Snapshot Files: C:\ProgramData\AutomationAnywhere\Bots\Logs\InvoiceProcessingBot-FinanceFactory\Snapshots
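
As a sketch of how a bot shell could derive that folder layout from just the bot name and factory name (the base path and naming convention below simply mirror the example above; adjust them to your own standard):

  import os

  BASE = r"C:\ProgramData\AutomationAnywhere\Bots\Logs"

  def standard_folders(bot_name, factory_name):
      # Build (and create, if missing) the standard Logs and Snapshots folders for a bot.
      root = os.path.join(BASE, f"{bot_name}-{factory_name}")
      folders = {
          "logs": os.path.join(root, "Logs"),
          "snapshots": os.path.join(root, "Snapshots"),
      }
      for path in folders.values():
          os.makedirs(path, exist_ok=True)
      return folders

  # e.g. standard_folders("InvoiceProcessingBot", "FinanceFactory")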

 

In addition to logging errors, it’s also important to set a standard for audit logging. Many bot developers use audit logging to track the progress of their bot runs; in human language, “Log the fact that I made it into this loop and how many items were found.” Audit logging is tougher to enforce through a bot shell since it depends on the specific bot, but be sure your bot shell at least establishes a pattern for creating and writing to an audit log file for each bot execution. In practice, this often means having the bot shell create a new log file for each date; any bot runs that occur on that date append to the same file. Each bot stores its log files in a bot-specific folder so that one bot’s logs are never confused with or overwritten by another’s. This structure becomes especially important as we consider log management.
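
In Automation 360 this often comes down to a Log To File action pointed at a standard path; as a hedged sketch of the one-file-per-date, one-folder-per-bot pattern described above (the file naming is an assumption):

  import datetime
  import os

  def audit_log(bot_log_folder, message):
      # Append a timestamped line to today's audit log for this bot.
      # One file per date; every run on that date appends to the same file.
      today = datetime.date.today().isoformat()                    # e.g. 2021-03-25
      path = os.path.join(bot_log_folder, f"AuditLog-{today}.txt")
      now = datetime.datetime.now().strftime("%H:%M:%S")
      with open(path, "a", encoding="utf-8") as log_file:
          log_file.write(f"[{now}] {message}\n")

  # e.g. audit_log(r"C:\ProgramData\AutomationAnywhere\Bots\Logs\InvoiceProcessingBot-FinanceFactory\Logs",
  #                "Entered the invoice loop: 42 items found")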

 

Log Management

 

Many people think to include logging within their bot shells, but very few think to include automatic log management. What is automatic log management? It means your bot shell should automatically clean up older log files so the drive doesn’t fill up indefinitely with bot logs. Exactly how you do this depends on your organization’s logging standards. Consider including log management in your bot shell that looks for any log files older than X days (typically 30, 90, or 180) and deletes them. This log management step can run at the start of every bot run, before the bot executes its own logic. At the beginning of the run, also make sure to create the log files and folders the bot expects to write to.
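
A minimal sketch of that cleanup step, assuming a simple file-age check against a configurable retention window (the retention value and folder are placeholders):

  import os
  import time

  RETENTION_DAYS = 90   # pick 30, 90, or 180 per your organization's standard

  def purge_old_logs(log_folder, retention_days=RETENTION_DAYS):
      # Delete log files older than the retention window.
      # Run this at the start of every bot run, before the bot's own logic.
      cutoff = time.time() - retention_days * 24 * 60 * 60
      for name in os.listdir(log_folder):
          path = os.path.join(log_folder, name)
          if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
              os.remove(path)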

 

Reporting

 

We cover reporting in depth in the ‘Defining Success’ learning experience. For now, the considerations for reporting are like those for logging. Your bot shell should automatically log the results of the bot’s execution to your centralized reporting repository (Bot Insight or otherwise) with standardized details about the bot’s execution. This might include things like the start and end times, the bot name, the factory or department the bot belongs to, and the bot runner machine.
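
As a sketch of what such a standardized execution record might contain (the field names are illustrative; in practice the bot shell would push these values to Bot Insight or your own reporting repository):

  import datetime

  def execution_record(bot_name, factory, bot_runner, started_at, status):
      # The standardized set of details the bot shell reports for every run.
      return {
          "bot_name": bot_name,
          "factory": factory,                      # or department
          "bot_runner": bot_runner,                # machine the bot ran on
          "start_time": started_at.isoformat(),
          "end_time": datetime.datetime.now().isoformat(),
          "status": status,                        # e.g. "Success" or "Failed"
      }

  # started = datetime.datetime.now()   # captured by the shell at the start of the run
  # ... bot logic runs ...
  # record = execution_record("InvoiceProcessingBot", "FinanceFactory", "RUNNER-01", started, "Success")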

 

Miscellaneous Features

 

Other functionality may also be helpful to include in your bot shell. This might include things like opening a support ticket if the bot fails, establishing a pattern for the use of Bot Insight, sending an email to the factory distribution list on bot completion, sending an SMS to the support team, or ensuring that the network drives the bot needs are mapped on the workstation. There’s really no end to what you could include in this shell.

One word of caution, though: don’t make the bot shell so burdensome that one needs a degree in computer science to figure out how to use it. You want your bot shell to be valuable and lightweight. The biggest benefit of a bot shell is that it accelerates development: developers can add the logic specific to their automation opportunity instead of dealing with commonly performed functions like logging and error handling. So be sure that any additional features added to the bot shell don’t take away from its ease of adoption.

 

Documentation

 

Documentation is key to a successful federated RPA practice. As a CoE you must enforce consistent documentation. Without consistent documentation, it will be impossible for your enterprise cybersecurity, audit, support, or other bot development teams to know what a bot is doing, what applications it interacts with, and what reusable components it might leverage. Think of documentation as a living, breathing record. Update the documentation when you update the bot. In this way, you won’t run into issues with people referencing an old version of the documentation and getting outdated information. You also reduce the amount of documentation developers need to do for each release. As a starting point, we recommend creating a documentation template (or using ours in the Private Bot Store) and including the following for all bots.

 

Bot Purpose

 

Documentation should describe the business problem the bot is designed to solve. You want readers to understand why the bot exists and what business value it provides. Other factories and bot builders can also reference this document to understand the original use case and the bot’s solution.

 

Flow Chart

 

Flow charts can be less detailed than the flow view of the bot in Automation 360, but should include what the bot does and how it interfaces with other applications and services. If the bot gets its list of tasks from a workflow application before it updates another internal web app, for example, those are the kinds of connections you should document.

 

Accounts Used

 

Should the original creator of this bot not be available, what information would another developer need to troubleshoot and update the bot? Key items to capture are what accounts (service and application) the bot uses, what credential vaults or lockers those values come from, and what group to contact to resolve issues with bot authentication in those various applications.

 

Where to Store Your Documentation

 

The CoE needs to set guidelines for storing documentation. Options include:

 

  • The Private Bot Store. This is the most RPA-specific solution, with custom documentation fields for automations. You can create custom filters to organize bots by department, factory, and other characteristics.

  • A centralized SharePoint with sub-repositories for each factory.

  • A Confluence or other wiki-style page that can be regularly updated, with revisions tracked.

 

Enforcement

 

To ensure that documentation is consistent and effective, the CoE needs to:

  • Establish a centralized location for documentation.

  • Provide a documentation template or example for new factories.

  • Inform all bot developers that documentation is required and will be validated before a bot is released into production.

 

Release Management

 

You have learned the standards designed to assist developers and the CoE prior to moving code into production. Next, we will learn another critically important process: releasing bots into production. It’s vital to iron this process out well before any factories are ready to push their code. Let’s look at a couple of questions that a CoE should ask in preparing for this release management process.

 

How often are we allowing releases?

 

This depends on the release management policies of your IT team. Some organizations are extremely strict, while others don’t have a clearly defined release calendar. The CoE should work with the factories to establish a release cadence that allows factories to meet their automation goals while preventing daily production issues caused by inadequate testing or poorly gathered requirements. As an example, you might set the code release window to every Thursday (weekly releases) with a code freeze at close of business the preceding Monday. In this way, factories are required to complete IT and QA testing of their automation before the Monday code freeze and cannot make any modifications to the code after that point. This strikes a balance between the CoE’s flexibility to assist with releases and the due diligence required of a factory to test, validate, and document its code.

 

What documents are required for release?

 

This could be a combination of what IT already does and what the release management team determines is appropriate for a bot release. Identify the set of documents needed to communicate clearly that the bot has been documented, tested, code reviewed, and signed off by the factory’s manager or lead. These documents can be formal or informal. Either way, the CoE must take the time to think through what should be required and share those requirements and sample documents with factories.

 

What happens if a bot isn’t working in production?

 

Regardless of the amount of testing performed, situations will inevitably arise where production systems don’t match the test environments, application configurations are wrong, or the bot simply isn’t working as it should. The CoE should be prepared to receive requests for emergency fixes to production. Weigh the pros and cons of allowing emergency fixes and define what documentation is required to support them. It’s also important to understand and address the underlying issues that led to the emergency. Was there not enough testing? Were there application inconsistencies between environments? Not enough error handling? Help the factory identify the issue and ensure they are working to prevent it from recurring.

 

Reusability

 

Reusability isn’t the last thing to consider, but it is a fitting topic to discuss once code is in production. Developing for reusability is all about breaking processes down into components that can be reused across multiple automation opportunities. Creating a bot to automate the assignment of cases in Salesforce? Which components of that bot (or package) could be reused across other Salesforce-related automation opportunities? There are likely many, but let’s focus on an easy example: authentication. Authenticating for the Salesforce case assignment bot likely follows an identical pattern to authenticating for every other Salesforce automation opportunity (albeit with different credentials). Instead of building out those login steps repeatedly for every Salesforce bot, consider creating a reusable bot or package that other developers (including yourself) can use for future Salesforce automation opportunities. In this way, you can focus your automation efforts on what is unique to a process, as opposed to the common tasks all processes share.
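
In Automation 360, a reusable component like this would usually be a shared child bot (invoked from the parent bot) or a custom package; the Python sketch below only illustrates the modular idea. The session object and credential lookup are hypothetical stand-ins for your UI automation and Credential Vault.

  def salesforce_login(session, get_credential):
      # Shared login routine every Salesforce bot calls instead of re-building
      # these steps itself. Only the credentials differ between bots and factories.
      username = get_credential("salesforce_username")   # pulled from the Credential Vault
      password = get_credential("salesforce_password")
      session.open("https://login.salesforce.com")
      session.type_into("Username", username)
      session.type_into("Password", password)
      session.click("Log In")

  # A case-assignment bot, an opportunity-update bot, and a report-export bot would
  # each call salesforce_login(...) first, then add only their process-specific steps.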

 

How can I communicate that a reusable component exists/is available for others to use?

 

This can be a real challenge, especially in a fully federated development environment where teams may not regularly interact because of geographical or organizational boundaries. Fortunately, Automation Anywhere has a solution: the Private Bot Store. The Private Bot Store enables the documentation and sharing of reusable components by providing an interface for developers to list them. Once shared, other developers can search the Private Bot Store bot and package listings to see what components exist, how those components work, and whom to contact (the original creator) should they have questions.

 

How does this work realistically in practice?

 

It’s really up to the CoE to decide. One popular approach is to make shared components available in a central sub-directory of the Bots directory in the Control Room. Based on the size of the RPA practice, a CoE lead can determine the specifics of access to that directory, but nearly everyone will need read access, while only one or two developers per federated group need write access (again, as a CoE lead, use your best judgement on granting access broadly without introducing the risk of inexperienced developers breaking the shared content). When a bot builder finds content on the Private Bot Store that they are interested in using within their build, they can review the details of that bot or package and follow the on-screen instructions for locating those resources within their own Control Room. As developers consume reusable content, development times improve because they don’t waste time re-creating solutions someone else has already built. And as developers take on new automations for applications that don’t yet have reusable resources, they should build their solutions in a modular way to maximize reusability.

 

Summary

 

Clearly defined development standards enable cleaner production releases, easier supportability of bots, and an auditable release process that tracks the evidence of testing and documentation. Without them, factories will submit code of varying quality, with varying degrees of success as a result. It’s important that the CoE not only establish good practices for development and deployment, but also model those practices for factories to follow. With a well-defined bot shell, documentation templates, and release management standards in place, factories and the CoE can focus on delivery instead of reinventing the wheel every time a new bot is preparing to go live.

 

Resources

 

Videos

Build a Bot Shell

