Our white papers provide insights into technology and cyber trends and innovations in the public sector.

Robotic Process Automation (RPA) Demystified: Understanding RPA and Its Applications

INTRODUCTION TO RPA

The computing industry has evolved from mainframe computers that physically took up entire buildings, to the microprocessor in the 1970s, to present-day smartphones and tablets with more processing power than those mainframes ever had. Each step in this evolution has introduced more opportunities to automate our daily tasks and workflows, examples of which we have seen throughout history: automated software testing and deployments, email sending and message routing, voice command software (e.g., Amazon Alexa, Google Assistant, and Siri); the list is endless. RPA is a current trend that the information technology (IT) industry views as the next step in the evolution of automation.

WHAT IS RPA?

RPA is the practice of automating tasks or workflows that do not require inference or insight, relieving humans of these simpler duties. These tasks are responsibilities that can be performed through business rules or codified logic and are usually implemented as scripts or “bots.” RPA is already integrated, often unnoticed, across many different business applications and problems. Any task that can be automated with business logic or a programming language can be considered a candidate for RPA.

What is new for many organizations is how RPA is transforming enterprises through integration with artificial intelligence (AI). AI is the ability of computer programs to complete tasks that normally require cognitive or human intelligence. Speech recognition, automatic photo recognition, and advanced decision making are all examples of AI. The integration of RPA and AI introduces powerful possibilities for solving complex business problems. For example, an AI application can be built to recognize people and objects in photos (e.g., facial recognition and object detection) and integrated with an RPA process that automatically sends notifications to users when an object or person is identified. AI can provide complex and detailed analyses to inform business decisions, and RPA can be the facilitator that sends decision outcomes to recipients and users.

While the integration of RPA and AI can solve problems that require complex reasoning and analysis, RPA tasks can also be implemented independently of AI and still have a broad impact on an organization. A prominent example is removing the human burden of grooming or preparing data prior to analysis or ingestion. It’s an unfortunate fact that data analysts and scientists invest a great deal of time wrangling data into specific formats before more complex and interesting analysis can be performed. Ingesting data of different formats, sizes, and types also requires substantial preparation and grooming before the data enters a database or data platform. RPA can automate these tasks altogether, freeing analysts, workflow managers, data scientists, and software developers from this uninteresting and monotonous work.
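
To make the idea concrete, below is a minimal sketch of an RPA-style data-grooming bot in Python. The folder names, column-naming rule, and CSV assumption are illustrative, not a prescribed design; a production bot would plug into your own ingestion standards and scheduling or orchestration tooling.

```python
# Minimal sketch of an RPA-style data-grooming bot.
# "inbox" and "staged" folder names and the column rule are assumptions.
import csv
from pathlib import Path

INBOX = Path("inbox")    # assumed drop folder for raw exports
STAGED = Path("staged")  # assumed folder the data platform ingests from

def groom(raw_path: Path, staged_dir: Path) -> Path:
    """Normalize headers and strip blank rows before ingestion."""
    staged_dir.mkdir(exist_ok=True)
    out_path = staged_dir / raw_path.name
    with raw_path.open(newline="") as src, out_path.open("w", newline="") as dst:
        reader, writer = csv.reader(src), csv.writer(dst)
        header = next(reader)
        # Example business rule: lowercase, underscore-separated column names.
        writer.writerow(h.strip().lower().replace(" ", "_") for h in header)
        for row in reader:
            if any(cell.strip() for cell in row):  # drop empty rows
                writer.writerow(cell.strip() for cell in row)
    return out_path

if __name__ == "__main__":
    for raw in INBOX.glob("*.csv"):
        print("staged:", groom(raw, STAGED))
```

Pointed at a drop folder and run on a schedule, a script like this stages clean files for ingestion without any human touch.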

APPLICATIONS

RPA adoption and integration can be applied across many federal and commercial practices. While not an exhaustive list of industries or problem spaces, examples of where RPA could have significant impact include banking and insurance, regulatory organizations (e.g., the Securities and Exchange Commission and the Consumer Financial Protection Bureau), security (e.g., Authority to Operate [ATO] documentation and process automation), acquisitions (e.g., IT Acquisition Review), and patent examination and review (e.g., the US Patent & Trademark Office).

BENEFITS

  • Labor savings on data entry and analysis tasks – RPA removes the need for analysts to perform time-intensive data grooming and preparation tasks, which results in labor savings while also enabling analysts to focus on more critical or urgent workflows and analyses.
  • Reduction in workload – RPA can perform simple tasks, enabling staff in an organization to focus on complex problems and decision making, ultimately reducing their workload and stress, and providing a better work experience.
  • Accuracy and quality – RPA minimizes workflow errors and improves accuracy rates.
  • Improved compliance – data governance and other types of compliance (e.g., regulatory and financial) can be enforced through RPA.
  • Flexibility and reusability – RPA components are reusable and more modular than standard macros and scripts. RPA applications can also be reused across an organization and its business units, providing scalability and reuse across an enterprise.

CONCLUSION

When assessing whether RPA is a good candidate for your organization, it’s important to first understand the problems your organization is trying to solve. There are many RPA-focused products on the market, but there is no “one-size-fits-all” product or model in the industry for complete RPA implementation. Implementing RPA solutions requires detailed analysis of requirements, workflow management, and the data within an organization. Once RPA solutions are implemented and scaled across the enterprise or within a small business unit, your workforce will quickly see the benefits of a more focused and leaner workflow.

eGT Labs is prototyping RPA and AI solutions, and our thought leaders have already developed solutions for many different types of challenges, including automated cloud deployments and migrations, and complete DevOps and development workflow automation.

Contact us at info@eglobaltech.com to find out how you can leverage RPA and AI at your agency!

Disrupting Government Healthcare with Blockchain

BACKGROUND

The healthcare industry has witnessed a surge of innovation in the past few years, fueled by advances in information technologies such as cloud, mobility, and data analytics. With the advent of blockchain, a new wave of disruptive ideas promises a generational transformation of how business is conducted in the healthcare industry. Although there is a lot of hype, blockchain has some interesting and practical applications that need to be carefully studied and evaluated. This paper focuses on the practical applicability of blockchain to government healthcare, addressing both strengths and weaknesses and how the adoption of blockchain can achieve process efficiencies, risk mitigation, and cost reductions in the long term.

WHAT IS BLOCKCHAIN

Designed to eliminate centralized trust-based authorities, blockchain is a decentralized network that empowers participants to conduct secure transactions with confidence, replacing trust with enforced consensus and verification. Participants utilize Public Key Infrastructure (PKI) encryption to securely execute a transaction, which is then verified or proofed by other nodes in the network. The transaction is then added as an immutable block to the blockchain and stored in a distributed ledger. A consensus-driven “single source of truth,” replicated in each node of the blockchain, shields it from security attacks that seek to change or alter data. The decentralized and immutable nature of blockchain is its core strength, ensuring data integrity and enabling process efficiencies.
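
To illustrate the immutability mechanics described above, here is a minimal Python sketch of a hash-chained ledger. It is illustrative only: real blockchains add digital signatures, Merkle trees, and a consensus protocol on top of this chaining.

```python
# Minimal sketch of a hash-chained ledger; tampering with any block
# invalidates its hash and breaks the chain.
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and every link; any tampering is detected."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
chain = [genesis, make_block(["alice pays bob 5"], genesis["hash"])]
print(verify(chain))                                  # True
chain[1]["transactions"] = ["alice pays bob 500"]     # attempted tampering
print(verify(chain))                                  # False: detected
```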

BLOCKCHAIN’S VULNERABILITY

Blockchain ensures the integrity of transactional data through cryptographic techniques such as Public Key Infrastructure (PKI). Each participant possesses a visible public key and a secret private key, which are used in conjunction to asymmetrically encrypt a peer-to-peer transaction that can only be decrypted by the receiving user. However, there is a catch: participants must secure access to their private keys; otherwise, the blockchain is no longer secure. For example, the thefts of bitcoins reported in the media were possible because attackers managed to gain access to private keys and conveniently transfer the bitcoins to their own accounts. Bitcoin is only one application of a public blockchain for conducting financial transactions.
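
The asymmetric-encryption step can be sketched with the third-party Python cryptography package; the claim payload and key size below are illustrative assumptions.

```python
# Sketch of asymmetric encryption with PKI-style key pairs
# (pip install cryptography). Payload and key size are illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The recipient holds the private key; only its public half is shared.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

ciphertext = recipient_public.encrypt(b"claim #1234: pay provider X", oaep)
plaintext = recipient_private.decrypt(ciphertext, oaep)
print(plaintext)

# If recipient_private leaks, anyone can decrypt -- which is why key
# custody, not the cryptography itself, is the weak point.
```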

PUBLIC VS PRIVATE BLOCKCHAINS

A public blockchain, also known as an unpermissioned blockchain, permits participation in and visibility of all transactions in the network and is more pertinent to use cases such as digital currencies. It promotes anonymity, uses considerable computing resources, and consumes significant amounts of time to complete transactions, all undesirable characteristics for an enterprise or established industry. Private blockchains, also known as permissioned blockchains, limit participants and have a predefined set of validators that verify each transaction before it is added as an immutable block. Private blockchains also require and enforce appropriate permissions before participants can perform transactions and access data. As a result, private blockchains perform better and still achieve efficiencies that could redefine how we conduct digital business in nearly every industry, including healthcare.

Key benefits of blockchain relevant to federal agencies include:

  • A reduced risk profile because of the distributed ledger model that by design is immutable and incorruptible;
  • Increased efficiencies with the elimination of centralized databases and the accompanying management burden; and
  • Decreased long-term operational and management costs as there is a shared responsibility across the participants in the blockchain.

THREE WAYS TO DISRUPT GOVERNMENT HEALTHCARE WITH PRIVATE BLOCKCHAIN

Below are potential federal use cases for increasing innovation in federal healthcare with blockchain:

1. Own Your EHR and Decide Who Has Access

Envision patients in complete possession of their electronic health records (EHRs), retaining the power to dynamically grant and revoke provider access. Protected health information (PHI) is currently stored in siloed databases managed by each provider and is just one data breach away from compromising the health privacy and identities of countless patients. Although it would not be feasible to store a patient’s entire EHR in a blockchain (the storage burden would be enormous), the metadata of a patient’s EHR, along with access control permissions, could be stored and transacted through blockchain. This limits access to patient health information, which would be securely shared only with privileged providers.
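
A hedged sketch of what such an on-chain record pointer might look like follows. The class, fields, and identifiers are hypothetical; in a real deployment the grant and revoke calls would be blockchain transactions validated by the network rather than in-memory updates.

```python
# Hypothetical sketch: only EHR *metadata* and access grants live on-chain;
# the record itself stays off-chain. All names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class EhrPointer:
    patient_id: str
    record_uri: str                        # off-chain location of the full EHR
    record_hash: str                       # fingerprint to detect tampering
    authorized: set = field(default_factory=set)

    def grant(self, provider_id: str):
        self.authorized.add(provider_id)   # would be a blockchain transaction

    def revoke(self, provider_id: str):
        self.authorized.discard(provider_id)

    def can_access(self, provider_id: str) -> bool:
        return provider_id in self.authorized

ehr = EhrPointer("patient-42", "s3://phi-store/rec42", "sha256:ab12...")
ehr.grant("provider-7")
print(ehr.can_access("provider-7"))   # True
ehr.revoke("provider-7")
print(ehr.can_access("provider-7"))   # False
```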

2. Secure Information Exchange Within a MAC Consortium

Information exchange within a consortium of Medicare Administrative Contractors (MACs) and the Centers for Medicare & Medicaid Services (CMS), such as strongly structured claims data, could be securely shared and processed through a permissioned private blockchain. This reduces CMS’s security risk profile and improves efficiency by eliminating the single central datastore.

3. Quality-Based Payment Models

As alternative payment models take off, quality measures can be codified, and a point-based transaction process can be implemented through blockchain. This could be further enhanced with smart contracts that express scoring algorithms and automatically execute transactions according to the criteria for awarding and deducting provider points. This approach could significantly reduce the administrative workload.
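
As a rough illustration of a codified scoring rule set, consider the following Python sketch. The quality measures and point values are invented for illustration; a real smart contract would encode measures negotiated under the payment model and execute automatically once those measures are attested on-chain.

```python
# Hedged sketch of a quality-scoring "smart contract"; rules and point
# values are invented for illustration.
QUALITY_RULES = {
    "readmission_below_target": +10,
    "screening_rate_met": +5,
    "late_reporting": -3,
}

def score_provider(measures: dict) -> int:
    """Apply codified quality measures and return the net point change."""
    return sum(points for rule, points in QUALITY_RULES.items()
               if measures.get(rule))

def settle(ledger: list, provider_id: str, measures: dict) -> int:
    """Record the outcome as a transaction; on a blockchain this step
    would execute automatically once the measures are attested."""
    delta = score_provider(measures)
    ledger.append({"provider": provider_id, "points": delta})
    return delta

ledger = []
print(settle(ledger, "provider-7",
             {"readmission_below_target": True, "late_reporting": True}))  # 7
```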

CONCLUSION

Blockchain holds a lot of promise but is not a panacea for all the technical and security issues that we face. It’s critical to understand the technology at a deeper level so it can be applied in the most successful way. Amidst the seemingly never-ending hype, it is important that agencies collaborate with the private sector to jointly experiment, prototype, measure, and learn through practice. This approach would ensure agencies build blockchain applications for the right use cases, thereby maximizing their investment.

As federal agencies gain a better understanding of blockchain and start to conceptualize potential use cases, they are bound to encounter many questions and challenges, such as:

  • What new security controls should we define and consider for blockchain applications?
  • How do we estimate the cost to build, operate, and maintain a blockchain application?
  • How do we determine and rationalize the investment in blockchain?

At eGT Labs, the R&D arm of eGlobalTech (eGT), we are researching and conceptualizing answers to these questions, while also prototyping use cases that can be built using permissioned blockchain platforms such as Hyperledger Fabric and IBM Blockchain. Our focus is on federal use cases around secure data sharing and identity management.

Contact us at info@eglobaltech.com to find out how you can leverage blockchain at your agency!

Human-Centered Design Delivers Focused and Meaningful Solutions

Human-Centered Design (HCD) begins with obtaining a deep understanding of customers’ needs and leveraging creativity and continuous feedback to better realize useful and tailored solutions. HCD provides a framework for connecting the best ideas to actual user needs – thereby producing successful solutions that last. 

BACKGROUND

Too often, service and solution providers make assumptions about the wants and needs of their client base, trying to overlay a technological solution on an issue without context. As solutions rely more on advances in artificial intelligence and robotics, it will be crucial to capture and maintain focus on the human element. Without direct interaction with the client to empathize with them and fully understand the issue in the context of their environment, organizations may develop solutions that do not fully respond to or resolve end-user business needs or requirements. These extraneous solutions can be wasteful, costly, and frustrating to end users.

HCD facilitates an interactive development approach aimed at making systems more usable and useful by focusing on the users, their needs and requirements, and by applying human factors, usability knowledge, and iterative techniques. HCD strives to create innovative products, services, and solutions through creative and collaborative practices.

In an agile environment, organizations can no longer rely on traditional approaches. By employing HCD, developers can create solutions that align with a customer’s values and build products or services that are more effective and intuitive to use.

HUMAN-CENTERED DESIGN ENGAGES THE CUSTOMER THROUGHOUT THE ENTIRE LIFECYCLE

The HCD process has three phases – the Inspiration Phase, the Ideation Phase, and the Implementation Phase.

Inspiration Phase

During the Inspiration Phase, the focus is on learning directly from the client through immersion in their environment. The Inspiration Phase is about adaptive learning, being open to creative possibilities, and trusting that by adhering to the needs of the client, the ideas generated will evolve into the right solution.

Ideation Phase

The Ideation Phase comprises two activities: Synthesis and Prototyping.

Synthesis

Synthesis brings together the needs and requirements learned during the Inspiration Phase and organizes them into themes and insights. The outputs of Synthesis are used to identify the best ideas and develop them into opportunities to prototype and test.

Prototyping

Following the Synthesis of ideas into opportunities, the second part of the Ideation Phase is Prototyping: expanding ideas into testable processes, products, or services. This cyclical process of testing prototypes, gathering feedback, and iterating is essential to creating an effective, innovative solution. HCD leverages the prototype, or pilot, approach as a tool to test the desirability, feasibility, and viability of solutions with clients at small scale and with minimal risk.

Whereas user-centered design focuses on improving the interface between users and technology, HCD concentrates on actively involving the client throughout the improvement and development process.

Implementation Phase

During the Implementation Phase, special attention is paid to how the chosen solution will impact the client environment and how it will be rolled out. Long-term success may require incremental change; therefore, understanding the target audience and considering change management are paramount.

HCD IS MORE SUCCESSFUL BY GIVING OWNERSHIP AND CONTROL OF THE SOLUTION TO THE CUSTOMER

Even after a solution is implemented, HCD encourages iterative, post-implementation feedback gathering and continuous refinement of the concept to best meet the end user’s needs.

Using a human-centered approach to design and development has substantial benefits for IT organizations and end users. Highly usable systems and products tend to be more successful both technically and from a usability perspective. Solutions designed using human-centered methods improve quality by:

  • Increasing the productivity of users and the operational efficiency of organizations
  • Creating systems that are more intuitive, reducing training and support costs
  • Increasing usability for users with a wider range of capabilities, improving accessibility
  • Improving the user experience and reducing discomfort and stress

Contact us at info@eglobaltech.com to find out how you can build successful technology solutions with our HCD framework!

Modernizing Legacy Applications With Microservices

BACKGROUND

In both commercial and government information technology (IT), it is common to see large, monolithic applications grow in size and scope at a rapid pace to the point where the application becomes nearly unmanageable, unsustainable, and unable to adapt to changes in the business environment. The more successful an application is initially, the more it is likely to grow in its capabilities until it operates well outside its original planned scope and is burdened by too many features. Many of these solutions are developed as monolithic applications in which most features are tightly coupled into a singular environment. In situations like this, the monolithic environment may become an impediment to progress, slowing down the software engineering team’s ability to enhance applications and resolve defects. Microservices present a design approach that keeps solutions small and modular and promotes extensibility. These benefits are further enhanced when microservices are paired with serverless computing, which adds tremendous scalability and performance and drastically reduces the cost of ownership with little effort.

WHAT ARE MICROSERVICES AND SERVERLESS COMPUTING? WHY DO THEY MATTER?

Microservices are discrete, self-contained, easily deployed, and easily managed services that operate on their own but are designed to be integrated into larger solutions. The key to microservices is that they serve a specific, well-defined purpose, typically determined by decomposing business capabilities down to a single verb, noun, or use case. A microservice is built to operate in its own environment and communicate with other microservices using highly efficient, loosely coupled communications. They are usable by web, desktop, and mobile applications as well as other microservices.

Serverless computing enhances microservices by providing an ideal platform that scales up and down instantly with little to no interaction. Serverless computing differs from standard cloud web hosting in that the developer does not need to manage the server. The code is deployed to endpoints, and the service fabric then completely manages the code. In times of high utilization, the service fabric increases the number of instances to meet performance demands. As the workload decreases, the service fabric automatically decreases the number of instances. Serverless computing is an ideal underpinning for highly modular solutions such as microservices. As a bonus, serverless computing charges are incurred only while the code is running. Combined, microservices architecture and serverless computing provide highly secure, extensible, and scalable environments with tremendous cost efficiency.
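
As a small illustration, here is what a serverless microservice can look like as an AWS Lambda handler in Python. The event shape assumes an API Gateway proxy integration, and the in-memory product list is a stand-in for a real data store.

```python
# Minimal sketch of a serverless microservice as an AWS Lambda handler.
# The product list is a stand-in for a real data store.
import json

PRODUCTS = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

def lambda_handler(event, context):
    """Return the product catalog; the platform runs and scales this
    on demand, with no server for the developer to manage."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(PRODUCTS),
    }
```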

HOW ARE THEY DIFFERENT FROM DEVELOPING CODE COMPONENTS?

Many developers attempt to build modular systems using code-level components. These components are dropped into existing applications and compiled and deployed with each application in which they are used. On the positive side, this results in some reuse and good performance. On the negative side, this approach is not as reusable as microservices, and it leads to configuration management challenges when different projects require different versions of the component. Once a library is updated, every application that uses its components must be updated, recompiled, and redeployed, even for a minor change.

Microservices are self-contained. They are built and compiled as separate projects, deployed in their own containers, and typically have their own databases. They are not compiled directly into solutions but are standalone services designed to be connected via standards-based, loosely coupled protocols such as Representational State Transfer (REST) and JavaScript Object Notation (JSON). In most cases, microservices are carefully versioned, allowing applications to continue to operate even when a newer version of the service is released, without necessarily requiring recoding, redeployment, and rework.
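
The versioning point can be sketched with a toy REST service; Flask is used here purely for brevity, and the /v1 and /v2 routes are illustrative. Because old clients keep calling /v1 after /v2 ships, nothing downstream needs to be recompiled or redeployed.

```python
# Sketch of version-pinned REST endpoints (Flask chosen for brevity;
# routes and payloads are illustrative).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/catalog/v1/products")
def products_v1():
    # Existing consumers keep working against the v1 contract.
    return jsonify([{"id": 1, "name": "widget"}])

@app.route("/catalog/v2/products")
def products_v2():
    # v2 adds a field without breaking v1 consumers.
    return jsonify([{"id": 1, "name": "widget", "in_stock": True}])

if __name__ == "__main__":
    app.run(port=8080)
```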

For example, a microservice may be developed to provide a catalog of products, another to manage procurements, and another to manage payments. These microservices might be built independently but together provide the core of an end-to-end solution that lets end users browse products, manage acquisitions, arrange payments, and track progress. When combined, these atomic services can serve many other types of solutions and provide a “single source of truth” for each business subject area that can be connected to manage entire business processes. Agile development teams experienced in developing microservice architecture-based solutions have a keen eye for decomposing solutions into discrete services and connecting them efficiently and extensibly.

APPLY MICROSERVICES AND SERVERLESS BEST PRACTICES

The best approach to getting started with microservices is to identify the right application to modernize, one that will help establish and capitalize on early successes. The application should be in the right business domain, and the right contractor should guide your organization through the pilot project. To assure success in the first microservices initiative, the following best practices apply:

  • Identify an application that supports a well-understood business domain that is easily decomposed into microservices.
  • Identify an application that supports processes that are well defined. Having to wait for business process re-engineering may slow the effort.
  • Target applications that already have funding for development or modernization if possible.
  • Engage experts in microservices and serverless solution design with demonstrated experience in fielding microservices.
  • Build and deploy the solution using leading-edge, serverless design platforms such as Amazon Web Services (AWS) Lambda or Azure Functions.
  • Employ compatible development processes, including DevOps and continuous integration/continuous delivery (CI/CD) to achieve maximum value as early as possible.

By transforming a project to a microservice architecture on a serverless computing environment, built by experts in DevOps and CI/CD, an organization can see excellent results early and often. The result will be faster delivery of actual value, rapid and consistent deployment of working code, lower cost of operations and maintenance, and a higher degree of reusability.

THE POTENTIAL PITFALLS OF MICROSERVICES

Technology employed by inexperienced personnel can lead to undesirable results and disappointment, and microservices are no exception. A common mistake is not carefully defining the scope and purpose of a microservice, allowing it to grow into its own monolith. Tightly binding services together and allowing too many dependencies makes microservices brittle, prone to refactoring, and less extensible. Conversely, making microservices overly granular creates microservice sprawl. To combat this, federal IT organizations should enlist contractors with proven experience in decomposing applications into microservices. For maximum benefit, the contractor should also possess a proven background in DevOps, CI/CD, and automated cloud deployment.

CASE STUDIES

eGlobalTech (eGT) is supporting the transformation of a mission-critical, Oracle-heavy system at the Department of Homeland Security (DHS), used for emergency communications, to a lightweight microservices architecture. We applied the strangler pattern to incrementally refactor existing application code into microservices deployed on AWS Lambda, a serverless deployment model. The combination of microservices and serverless is enabling eGT to save the customer from expensive Oracle licenses and optimize cloud expenditure while simplifying operations and maintenance processes.

At Health and Human Services (HHS), our customer was challenged with managing and maintaining more than six systems for a diverse set of grants programs. eGT implemented a new grants performance management platform applying microservices principles, replacing the existing systems and consolidating all the grants programs into one common platform. By engineering a flexible design, we can declaratively onboard new grants programs and accommodate changes to existing program requirements with no changes to the underlying software. eGT deployed the first production release to Microsoft Azure in less than four months and has since continuously delivered new features and capabilities through pipeline automation.

CONCLUSION

Microservices offer a pathway to get under-performing modernization initiatives on the right track. They yield tangible results more quickly because of their highly modular design. By accelerating the development of high-performing, modern applications, microservices break the cyclical, large-scale modernization pattern most federal agencies experience every ten or so years. They can facilitate continuous and perpetual evolution of applications and permanently shield them from becoming obsolete and costly. Microservices deployed on modern serverless computing environments drastically enhance scalability and reliability while providing substantial cost avoidance. Achieving these results, however, requires support from agile development teams with demonstrated experience in transforming monolithic applications into microservices architectures.

Contact us at info@eglobaltech.com to find out how your organization can begin the digital transformation to microservices!

Best Practices in Data Analytics

BACKGROUND

Today, the average agency within the Federal Government manages dozens, if not hundreds, of data sources and services that drive its mission functions. These data sources are of variable pedigree, quality, and efficacy and are frequently developed in silos, presenting substantial challenges. The data the Government leverages is becoming increasingly complex with the growing use of geospatial data, large sets of unstructured data, and streaming data from sources such as Internet-of-Things (IoT) devices. Despite these challenges, Government agencies must seek new ways to exploit data to support their missions and improve the return on investment for taxpayers. Effective Data Analytics is crucial to extracting maximum value from information.

Data Analytics is the discipline of identifying, extracting, cleansing, transforming, mining, and visualizing data for valuable information that helps leadership make critical decisions. The effective employment of Data Analytics provides agencies with high-value data regarding their organization, their partners, and their stakeholders. It also helps reduce costs by decreasing the need to build potentially redundant systems containing data that already exists in authoritative sources but may not have been identified.
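
As a small illustration of that extract-cleanse-transform-visualize cycle, consider the following Python sketch using pandas. The file name and column names are hypothetical, and the plotting step assumes matplotlib is installed.

```python
# Minimal sketch of an extract-cleanse-transform-visualize pass.
# "grants.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("grants.csv")                       # extract
df = df.dropna(subset=["award_amount"])              # cleanse: drop incomplete rows
df["award_amount"] = df["award_amount"].astype(float)

by_program = (df.groupby("program")["award_amount"]  # transform/mine: aggregate
                .sum()
                .sort_values(ascending=False))

print(by_program.head(10))                           # surface for decision-makers
by_program.head(10).plot(kind="bar")                 # visualize (needs matplotlib)
```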

eGLOBALTECH’S DATA ANALYTICS

Our Advanced Data Analytics Framework provides an extremely agile approach to discovering, analyzing, and leveraging data – including innovative approaches for data analytics, predictive analytics, and sentiment analysis. Our comprehensive framework focuses on the following best practices:

  • Goals and Metrics – Establishing goals prior to engaging in data analysis is essential to defining the right metrics and for keeping data analysis efforts on target. Similarly, knowing the target metrics keeps the team focused and helps avoid scope creep.
  • Data Pedigree – Understanding the pedigree of your data sources is essential to ensure you are accessing the most authoritative and reliable data possible.
  • Data Virtualization – Data Virtualization uses data in place rather than costly and error-prone file import/export. Using data virtualization wherever possible reduces the need for expensive and time-consuming data loading, thereby reducing costs and data latency.
  • Service Level Agreements (SLA) – Establishing SLAs is an essential element in data analytics to ensure external and partner-owned data sources are maintained with appropriate levels of availability, reliability, data latency, and quality.
  • Agile Analysis – Agile software engineering practices are ideal for data analytics because they promote early prototyping of data followed by increasing refinement. Presenting data to users early lets them help shape the discovery and analysis of additional data. The use of wireframes can drastically improve the efficacy and usefulness of visualizations.
  • Self Service – Effective data analytics solutions provide the right data services and visualization capabilities which enable users to derive answers on demand.
  • Microservices – Build highly performant, rapidly developed microservices based on open standards such as Representational State Transfer (REST) and JavaScript Object Notation (JSON).
  • Security – Data security is essential for protecting organizational assets and privacy and reducing organizational vulnerability to hacking attempts.

Our leading-edge framework delivers the following key features:

  • Rapid delivery of initial data capabilities via our DevSecOps framework
  • Lowest possible data latency with ideal pedigrees
  • Accelerated delivery and deployment of data services
  • Extensive application of open data standards such as microservices and service bus technology
  • Expertise in advanced data analytics tools such as Cloudera Hadoop, Pentaho, and numerous open source visualization technologies
  • Cross-database platform support including Oracle, Microsoft SQL Server, IBM DB2, Amazon RDS, and most leading platforms
  • Self-service Business Intelligence using leading-edge solutions including Tableau and QlikView
  • Advanced analytics using SAS, Cognos BI, Business Objects, and the R programming language

CONTACT US AT:
Info@eglobaltech.com if you would like more information on this topic!

DevSecOps Drives Reliable and Secure Software

BACKGROUND

Software delivery in the Federal Government is transforming at the fastest pace since the advent of the Internet. With increasingly tight budgets and growing cybersecurity threats, the government must deliver more with less, and do it more securely. However, many software development projects are plagued by fragmented teams that separate development, operations and maintenance, and security into silos. This results in numerous adverse effects, including untimely delivery, defective code, and vulnerable code. Traditional waterfall methodologies feature a serial chain of phase gates that must be completed in a specific order. Requirements, design, development, and testing are executed as a chain of events, with documentation driving readiness reviews between each phase. In most cases, security scans are executed only when software is promoted to production, with late-discovered vulnerabilities forcing rework and exposing the network to threats.

With DevOps, developers create continuous delivery pipelines enabling them to build, deploy, and test software with every check-in. Code is unit tested, regression tested, deployed, and validated with each build. Highly customized deployment scripts give way to defining infrastructure as code, which accelerates application deployment from days to minutes. DevOps also reduces technical debt because deployment happens continuously throughout development rather than at the end.

DevSecOps represents a refinement of DevOps, highlighting the security aspect. It breaks down barriers by providing an operations and security conscious software development paradigm that fuses development, operations, and security into a streamlined process. DevSecOps incorporates all aspects of security into every facet of development and deployment. Information assurance and cybersecurity activities are integrated into the agile development process, ensuring steady progression against certification and accreditation requirements. DevSecOps provides an integrated approach unifying teams, technologies, and processes for faster, more robust, and secure products.
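
A minimal sketch of the “security in every build” idea follows: a pipeline stage that fails the build if either the tests or a static security scan fail. The tool choices (pytest, bandit) and the app/ path are illustrative assumptions, not a prescribed stack.

```python
# Hedged sketch of a per-check-in pipeline stage; tool names and paths
# are common choices, not a prescribed stack.
import subprocess
import sys

STAGES = [
    (["pytest", "-q"], "unit and regression tests"),
    (["bandit", "-r", "app/"], "static security scan"),
]

def run_pipeline() -> int:
    for cmd, label in STAGES:
        print(f"== {label}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"pipeline failed at: {label}")
            return 1   # block the build; vulnerabilities never reach prod
    print("build validated; promoting artifact")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```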

DEVSECOPS EVOLVED: DEVSECOPS CENTER OF EXCELLENCE AND EGT LABS®

eGlobalTech (eGT) established eGT Labs as a forward-leaning research and development environment to solve challenging client problems and to create high-value products and services. eGT Labs, in turn, launched the DevSecOps Center of Excellence (CoE) to develop best practices and technical guidance on successful DevSecOps deployment. The DevSecOps CoE defined the following best practices as critical to deployment:

  • Establish the Culture – DevSecOps is not a singular process, nor is it a single lifecycle. It is an innovative approach to system engineering that focuses on teamwork, integration of cross-cutting concerns, and success through frequent repetition.
  • Coaching Is Key – Coaches should be heavily engaged during early adoption to help managers, engineers, and even contracting staff fully understand DevSecOps and interact with it effectively.
  • Security-First Design – Security should be applied not only to the code, but also to the processes involved in coding. This provides an accelerant from project initiation that helps streamline deployment and delivery.
  • Automation – DevSecOps performed the eGT way features automation at all levels, including code generation, deployment, testing, and security testing, to maximize the impact of DevSecOps.
  • Build Often/Deploy Often/Test Often – A key aspect of DevSecOps is daily builds and deployments, with testing in every build. Products built with DevSecOps are better tested and more secure than products built without it.
  • DevSecOps Friendly Acquisition – DevSecOps thrives when teams are fully integrated and encouraged to collaborate. Enhancing acquisition strategies that favor integration of cross-functional teams is essential to reducing costs and developing better products.

eGT Labs developed a framework that enables rapid delivery of secure solutions of superior quality by incorporating security and operations readiness from day one. Our leading, end-to-end framework and toolkit, DevOps Factory®, includes the following critical elements:

  • Implements security-first design and development.
  • Automates security governance and controls consistent with Ongoing Authorization (OA).
  • Secures the continuous integration/continuous delivery (CI/CD) pipeline through authentication, secure storage of build artifacts, key management, etc.
  • Automates security testing, static code analysis, configuration management, incident response and forensics, secure backups, log monitoring, and continuous monitoring and mitigation.
  • Incorporates compliance with FISMA, NIST, and other applicable federal standards and guidelines.

DEVSECOPS APPLIED

A public sector eGT client had a complex geospatial system prototype composed of Microsoft and open source applications with a growing number of ArcGIS services. This prototype was used in a production capacity and encountered frequent outages and performance issues.

To solve these issues, we applied DevOps Factory® to re-engineer the target architecture, implement security-first design, and automate the end-to-end cloud migration process onto our managed AWS infrastructure.

Results included:

  • Migrated and operationalized a secure geospatial cloud ecosystem to AWS within three months, compliant with federal security standards.
  • Securely on-boarded over a dozen complex applications and systems.
  • Seamlessly supported 400+% growth of geospatial services.
  • Achieved 99.99% operational availability.


CONTACT US AT:

Info@eglobaltech.com if you would like more information on this topic!

Copyright 2018 | eGlobalTech | All rights reserved.