The eGT Blog is where we share the ideas and knowledge we have gained working in technology and cybersecurity in the public sector. We passionately believe in our federal clients’ missions to make America stronger, smarter, and safer, so we created this space to share ideas on how to advance our government’s innovation and modernization.

Announcing Product Update: Cloudamatic 3.0


We are pleased to announce a major product update to eGlobalTech’s Cloudamatic®, a scalable open source solution for automating the complete deployment and orchestration of infrastructure, security, configuration, and provisioning for any application to the cloud.  Used in both the public and private sectors, Cloudamatic shortens application migration cycles from weeks to a single day.

With release 3.0, Cloudamatic includes the following features:

  • Complete Microsoft Azure Migration & Deployment Support
  • Containerization Support Across All Cloud Providers
  • Kubernetes Support Across All Cloud Providers

Find out more about these features here.

5 Ways to Plan and Prepare for a Cyber Audit


Why Are Cyber Audits Important?

Cyberattacks are continuously evolving. How well organizations evolve to protect themselves and their clients depends on various factors, including how they review their practices, processes, and infrastructure. For the federal government, cyber audits play a central role in keeping agencies secure and prepared for future threats. While preparing for and undergoing an audit can be unwieldy, these audits can highlight gaps and serious areas for improvement.

How to Prepare Your Organization For A Cyber Audit

Leveraging their experience and lessons learned, our cybersecurity experts compiled the five steps you and your organization can take to best prepare for a cyber audit.

1. Establish a communications plan

It is critical to establish a communications plan with all stakeholders in the organization to ensure everyone is aware of their responsibilities and understands the proper flow of information. First, the organization should identify a primary point of contact to lead the effort and serve as the liaison between the auditors and the organization’s stakeholders. Second, the organization should identify points of contact within each area included under the scope of the audit. All points of contact must be trained on how to respond to audit requests and interviews. The guidance should emphasize that stakeholders answer only the question asked of them and not provide additional details outside its scope.

2. Review and understand rules of engagement 

Audits will have formal rules of engagement that define what will be examined and how. The rules will include important items, such as the amount of access to be given to auditors, the extent to which penetration testing can use offensive capabilities, and the overall scope. A clear understanding of how the audit will be conducted and what the auditors can and cannot request will ensure a smoother audit process.

3. Take a full scale and proactive inventory

A significant part of the audit is the review of the systems under the organization’s control. The rules of engagement should clearly define the boundary of the audit and the types of items to be included. Even before knowing exactly what is in scope, the organization should prepare a full inventory that includes not only all physical devices under operational control, but also their corresponding Authority to Operate documents and management documentation. It is best practice to prepare a clear index that gathers all pertinent information in a single location, including a list of all systems and software in use, the machines each piece of software is installed on, and the license structure of the software; a minimal sketch of such an index follows.
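As an illustration only (the fields, values, and file name below are assumptions, not a prescribed template), a short script can assemble that index into a single file that the audit point of contact and stakeholders can reference:

```python
# Illustrative sketch of a single-location inventory index for audit prep.
# Field names, values, and the output file name are assumptions.
import csv

inventory = [
    {
        "system": "HR Portal",
        "host": "hrportal-prod-01",
        "software": "PostgreSQL 13",
        "license": "Open source (PostgreSQL License)",
        "ato_document": "hr-portal-ato.pdf",
    },
    {
        "system": "HR Portal",
        "host": "hrportal-prod-01",
        "software": "Red Hat Enterprise Linux 8",
        "license": "Subscription, renews annually",
        "ato_document": "hr-portal-ato.pdf",
    },
]

# Write every system, host, software package, license, and ATO reference
# to one CSV index that can be handed over during the audit.
with open("audit_inventory_index.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(inventory[0].keys()))
    writer.writeheader()
    writer.writerows(inventory)
```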

4. Establish evidence management and clear ownership of items

Due to the high volume of requests that occur during an audit, it is crucial to establish evidence management and clear ownership of evidence for traceability. A helpful way to manage evidence is to establish tracking methods prior to the initiation of the audit and construct a central repository to store all evidence, with separate areas based on the type of request (documents, logs, samples, etc.). One method is to manage audit requests via a centralized tracking log that includes the specific details of each request, and to tag any evidence stored in the repository with a consistent naming convention that correlates to the audit request tracking number. Having these tracking mechanisms in place shortens response times during the audit when stakeholders need to refer to specific items for follow-up information. A minimal sketch of this idea follows.
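The sketch below illustrates one possible shape for that tracking log and naming convention; the class, field names, and tracking-number format are hypothetical, not a prescribed standard:

```python
# Illustrative sketch of a centralized audit-request tracking log whose
# evidence file names embed the request's tracking number for traceability.
from dataclasses import dataclass, field

@dataclass
class AuditRequest:
    tracking_id: str                     # e.g. "AR-0042" (hypothetical format)
    description: str
    request_type: str                    # "document", "log", "sample", ...
    owner: str                           # stakeholder responsible for the response
    evidence_files: list = field(default_factory=list)

    def add_evidence(self, label: str, extension: str) -> str:
        """Name the evidence file so it maps back to this request's tracking number."""
        filename = f"{self.tracking_id}_{self.request_type}_{label}.{extension}"
        self.evidence_files.append(filename)
        return filename

request = AuditRequest("AR-0042", "Firewall change logs for Q1",
                       "log", "Network Operations")
print(request.add_evidence("firewall-changes-q1", "csv"))
# -> AR-0042_log_firewall-changes-q1.csv
```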

5. Identify the status of all items with plans for updates 

A successful audit includes proper documentation and demonstrates that the organization is following the policies and procedures it previously established. In preparation for the audit, the organization should have all related policies, procedures, and guidance collected in a single location, with the appropriate update schedules provided. Additionally, the most recent set of system scans should be provided, along with the corresponding Plan of Action and Milestones (POA&M) document or governance, risk, and compliance (GRC) documentation.

Protect Your Organization From Cyber Attacks Today

Have an upcoming cyber audit or looking to make your organization more secure? eGlobalTech’s cyber experts can help you prevent attacks with our end-to-end cybersecurity services. Contact us today. 

Improving Your FISMA Scorecard Rating


Insights To Improve Your FISMA Scorecard Rating Today

Federal Information Security Modernization Act (FISMA) Scorecards are a crucial aspect of keeping federal agencies secure. These scorecards measure agency performance in different cyber “areas of concern” and identify weaknesses that could be exploited by cybercriminals. Are you looking to improve your scorecard rating? Our Cybersecurity Solutions experts created this white paper to provide tips and best practices for achieving a high scorecard rating.

Download Your White Paper Today
eGlobalTech’s RPA Bot: Automatic SSP Validation for ATOs


Robotic Process Automation (RPA) can revolutionize the way organizations function, enabling employees to focus on complex problems while RPA bots fully automate tasks like data entry, document reviews, and screen scraping. By automating tasks that don’t require human decision-making or interference, RPA saves organizations time and money.

While many processes would benefit from RPA, this use case will focus on the Authority to Operate (ATO) document review for System Security Plans (SSPs). This process is a great candidate for RPA because its underlying business process is well-defined, the document review steps are repeatable and consistent, the reviews are time-intensive, and the data validation steps do not require human decision-making.

What is an Authority to Operate (ATO)?

An ATO grants a system the ability to operate in production environments on a federal agency’s infrastructure. For a project to acquire an ATO, multiple security documents need to be thoroughly completed and reviewed, including the System Security Plan (SSP), which documents key attributes of a system’s security posture. SSPs are reviewed to ensure system information is accurate, security levels are correct, and access controls are correctly implemented. It typically takes Information System Security Officers (ISSOs) two days to review and validate an SSP, and review findings must then be triaged and remediated, which can stretch the ATO process to months.

eGlobalTech’s Bot: Automate the SSP Review

To accelerate the review process and empower ISSOs to address system vulnerabilities quicker, we developed an RPA bot using the UiPath tool suite to validate and verify the completeness of an SSP, including system controls for a moderate-level system, the system details, security categorization, information types, security images, and access controls.

After the evaluation, our bot creates a scorecard and populates the results for ATO evaluators. It completes the entire evaluation and scorecard in under four minutes: it scans the SSP, highlights errors in various colors, and even writes detailed comments for ATO evaluators (e.g., “a required image is missing in this section”) – all without human intervention.
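The production bot described above is built with the UiPath tool suite; the Python sketch below is only a simplified illustration of the underlying idea – check an SSP for required elements and produce a scorecard with evaluator comments. The section names and validation rules are assumptions made for the example, not our actual checklist.

```python
# Simplified illustration of rule-based SSP validation producing a scorecard.
# Section names and checks are illustrative assumptions, not the real rule set.
REQUIRED_SECTIONS = {
    "System Details": lambda text: len(text.strip()) > 0,
    "Security Categorization": lambda text: "Moderate" in text,
    "Information Types": lambda text: len(text.strip()) > 0,
    "Access Controls": lambda text: "AC-2" in text,
}

def review_ssp(ssp_sections: dict) -> dict:
    """Return a scorecard mapping each required section to a result and a comment."""
    scorecard = {}
    for section, check in REQUIRED_SECTIONS.items():
        text = ssp_sections.get(section, "")
        if not text:
            scorecard[section] = ("FAIL", "Required section is missing.")
        elif not check(text):
            scorecard[section] = ("FAIL", "Section is present but incomplete.")
        else:
            scorecard[section] = ("PASS", "")
    return scorecard

sample_ssp = {
    "System Details": "FIPS 199 moderate general support system ...",
    "Security Categorization": "Moderate",
    "Access Controls": "AC-2 and AC-3 implemented as described ...",
}
for section, (result, comment) in review_ssp(sample_ssp).items():
    print(f"{section}: {result} {comment}")
```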

Impact

With the SSP review completely automated, an ISSO can refocus their time on reviewing findings and remediating any system vulnerabilities. The bot also accelerates the ATO process by reducing false starts in initial submissions. This use case can be expanded to other security documents within the ATO process, enabling the automated review of all ATO documents and shortening the time it takes for systems to achieve an ATO from months to days.

eGlobalTech and RPA

At eGlobalTech, we think about RPA at the enterprise level. We not only develop bots for our clients – we work with clients to identify opportunities for RPA through process and workflow evaluation, optimize existing workflows, lead bot deployments, and perform ROI evaluations. We also formed a strategic partnership with UiPath, a leading RPA software company that’s deploying bots at over 40 federal agencies. Would you like to see our bots in action or start an initial workflow assessment? Contact us today.

Want to learn more about RPA? Our RPA Demystified Whitepaper defines RPA and provides use case examples.

eGlobalTech Has Moved!


We have exciting news – we’ve moved! To accommodate business growth, eGlobalTech relocated our corporate headquarters from Arlington to the Tysons Corner area on September 16, 2019. Our new headquarters is located at 1900 Gallows Road, Suite 800, Vienna, Virginia.

The new Tysons headquarters will provide a contemporary workplace, including a dedicated space for eGT Labs (our R&D arm). By consolidating four office locations into one headquarters, our new space will empower employees to work together easily and solve problems more efficiently, enhancing the customer experience. The new facility will also support eGT’s anticipated future growth.

Read the full press release for more details.

Compare Cloudamatic To Other Cloud Deployment Tools

eGlobalTech’s Cloudamatic (www.cloudamatic.com) is a 100% open source cloud deployment tool developed by our research and development arm, eGT Labs.

While most cloud deployment and migration tools focus on infrastructure, that is only a fraction of what needs to be done. Cloudamatic not only performs infrastructure migrations (including configuration, orchestration, patching, and monitoring with every deployment), but migrates your applications entirely through automation.

See how Cloudamatic compares to its competitors:

[Image: Comparison of Cloudamatic to other cloud deployment tools]

Techniques for Designing Microservices


Part 2 of our Microservices & Containers Series

In our “Part 1: Microservices & Containers – A Happy Union” blog post, we outlined the benefits of microservices and described how to integrate them with containers to enable teams to build and deploy microservices in any environment. In this second and final part of the series, we explain approaches for defining and designing flexible microservices.

When designing microservices, it’s important to ensure that each service maps to a single business capability. Because microservices follow a lean and focused philosophy, designing them around business capabilities ensures no unnecessary features or functionality are designed or built, which reduces project risk, the need to refactor code later, and the overall complexity of the product. Since microservices are built around business capabilities, it’s critical to have business stakeholders or users participate in the design sessions.

Defining Microservices

It’s tempting to start implementing small services right away and assume that, when combined, all services will represent a cohesive and modular product. Before diving into the implementation, it’s critical to understand the complete picture of all services and how they interact with one another to avoid feature creep and features that don’t meet business needs. An effective approach is to have key technical staff (usually a lead designer, technical lead, and architect) and stakeholders collaborate using event storming. Event storming enables project implementers and domain experts to describe an entire product or system in terms of the events that happen within it. This empowers both business and technical staff to build a complete view of the problem space and design product services using plain, easy-to-understand descriptions rather than technical jargon.

Using post-it notes, the team arranges events in a rough order of how they might happen, without initially considering how they happen or what technologies or supporting structures produce them. Events should be self-contained and self-describing, with no concern placed on implementation details. When doing this exercise, it’s helpful to draw a causality graph to explore when events occur and in what order. Once all events are documented, the team then explores what could go wrong in each context. This approach prompts the question “What events do we need to know about?” and helps identify missing events – a powerful technique for exploring boundary conditions and assumptions that might affect realistic estimates of how complex the software will be to build. When the team feels all events have been adequately documented, the next step is to document user personas, commands, and aggregates.

  • User Personas
    • User personas document the various types of users who would interact with the system. Personas help teams understand the goals of the user performing a given action, which is helpful in the design phase.
  • Commands
    • A command is a user action or external system trigger that causes an event.
  • Aggregates
    • An aggregate receives commands and decides whether or not to execute them, producing other events as necessary.

Once all personas, commands, and aggregates are documented, the team can see the “big picture” of how the entire system or product should work to meet all requirements. This approach is excellent for designing microservices, as each event or handful of events can be clearly assigned to a microservice. The service author then creates a service that accommodates only those events, producing lean business capabilities with a well-defined scope and purpose. Event storming also works well for both technical and non-technical stakeholders, as the entire system is described by its events; this removes barriers to participating in the design process because technical implementation details are not discussed. The approach works for an existing system or a new application, and the sketch below shows how its outputs can translate into code.
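As a minimal sketch, assuming a hypothetical order-management domain (all names below are illustrative, not a prescribed model), the commands, events, and aggregates identified during event storming map naturally onto small, focused service components:

```python
# Illustrative mapping of event-storming output (commands, events, aggregates)
# into code. The order-management domain and every name here are hypothetical.
from dataclasses import dataclass

@dataclass
class PlaceOrder:          # command: a user action that may cause an event
    order_id: str
    amount: float

@dataclass
class OrderPlaced:         # event: something that happened in the domain
    order_id: str

@dataclass
class OrderRejected:       # event: the "what could go wrong" path
    order_id: str
    reason: str

class OrderAggregate:
    """Receives commands, decides whether to execute them, and emits events."""
    def __init__(self, credit_limit: float):
        self.credit_limit = credit_limit

    def handle(self, command: PlaceOrder):
        if command.amount > self.credit_limit:
            return OrderRejected(command.order_id, "credit limit exceeded")
        return OrderPlaced(command.order_id)

aggregate = OrderAggregate(credit_limit=500.0)
print(aggregate.handle(PlaceOrder("A-100", 120.0)))   # OrderPlaced
print(aggregate.handle(PlaceOrder("A-101", 9000.0)))  # OrderRejected
```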

Design Techniques for Microservices

Once a team has all their services defined and organized, they can focus on the technical details for each microservice. The implementation details will be specific to a given service, and below are guidelines that will help when building out a microservice:

  • Develop a RESTful API
    • Each microservice needs a mechanism for sending and consuming data and for integrating with other services. To ensure a smooth integration, it’s recommended to expose an API with the appropriate functionality and well-defined response data and formats (see the first sketch after this list).
  • Manage Traffic Effectively
    • If a microservice must handle thousands or millions of requests from other services, it can become overwhelmed and ineffective in meeting the needs of those services. We recommend using a messaging and communication service like RabbitMQ or Redis to buffer and distribute the traffic load (see the second sketch after this list).
  • Maintain Individual State
    • If it’s necessary for a service to maintain state, that service can define the database requirements that satisfy its needs. Databases should not be shared across microservices, as sharing goes against the principle of decoupling, and database table changes in one microservice could negatively impact another service.
  • Leverage Containers for Deployments
    • As covered in Part 1, we recommend deploying microservices in containers so that only a single tool (a containerization platform like Docker or OpenShift) is required to deploy an entire system or product.
  • Integrate into the DevSecOps Pipeline
    • It’s important that each microservice maintains its own separate build and is integrated into the overall DevSecOps CI/CD pipeline. This makes it easy to perform automated testing on each individual service and to isolate and fix bugs or errors as needed.
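First, a minimal sketch of a microservice exposing a RESTful API. FastAPI, the order-service example, and the in-memory store are assumptions made for illustration; any HTTP framework would follow the same shape.

```python
# Illustrative order-service microservice exposing a small RESTful API.
# FastAPI, the endpoints, and the in-memory store are assumptions for the sketch.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="order-service")

# In-memory store standing in for the service's own database.
ORDERS = {"A-100": {"order_id": "A-100", "status": "shipped"}}

@app.get("/orders/{order_id}")
def get_order(order_id: str):
    """Return a single order, or a 404 if this service does not know it."""
    order = ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order

@app.post("/orders/{order_id}")
def create_order(order_id: str):
    """Record a new order and return it to the caller."""
    ORDERS[order_id] = {"order_id": order_id, "status": "received"}
    return ORDERS[order_id]

# Run locally with: uvicorn order_service:app --reload
```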
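Second, a minimal sketch of buffering traffic through a queue rather than calling a busy service directly. Redis, the connection settings, and the queue name are assumptions for illustration; RabbitMQ (via a client such as pika) follows the same producer/consumer pattern.

```python
# Illustrative producer/consumer queue that smooths traffic between services.
# Redis, the host settings, and the queue name are assumptions for the sketch.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def enqueue_request(payload: dict) -> None:
    """Producer side: push work onto the queue and return immediately."""
    r.lpush("order-requests", json.dumps(payload))

def process_next_request():
    """Consumer side: the microservice pulls work at its own pace."""
    item = r.rpop("order-requests")
    return json.loads(item) if item else None

enqueue_request({"order_id": "A-102", "amount": 42.0})
print(process_next_request())
```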

How eGlobalTech Can Help You Deploy Microservices

As outlined in Part 1 of our blog series, eGlobalTech has extensive past performance developing and deploying microservices for multiple clients. Our experience includes containerization through Docker and OpenShift, and we have leveraged containers to deploy microservices across many complex applications. Our Technology Solutions group built and integrated microservices on existing legacy applications, developed new applications using microservices, and migrated legacy architectures to complete microservices-driven architectures. If you’d like to discuss how eGlobalTech can help your organization embrace or implement microservices, please email our experts at info@eglobaltech.com!

USPTO Awards Cloud Containerization Contract to eGlobalTech


eGlobalTech was awarded a contract by the U.S. Patent and Trademark Office (USPTO), Office of Infrastructure, Engineering and Operations under the Office of the Chief Information Officer, to advance containerization for USPTO’s IT West Lab Environment.

eGlobalTech applied its deep expertise in cloud deployment automation using a wide array of open-source containerization and orchestration technologies such as Docker and Kubernetes to help the USPTO achieve a cloud smart strategy.

Containers will provide the ability for the USPTO to build features and services consistently across the USPTO OCIO. This consistency will drive cost savings: consistent environments are easier to maintain, containers facilitate faster software delivery, and containers share and use resources more efficiently, reducing the dependency on virtual machines. These benefits all flow from implementing a cloud smart strategy.

Interested in learning more about our work? Find out more about our past performances here.

eGlobalTech Joins Tetra Tech


We’re thrilled to announce that eGlobalTech has joined the Tetra Tech family of companies. Headquartered in California, Tetra Tech is a leading provider of high-end consulting and engineering services for projects worldwide.

For 15 years we’ve provided our clients with innovative solutions and cutting-edge technologies. Today is a new chapter in our story; this acquisition combines Tetra Tech’s mission expertise with our high-end IT consulting services, providing new and exciting opportunities for both our employees and clients. We look forward to continuing to serve our clients as we join the Tetra Tech family.

Read the full press release from Tetra Tech for more information.

Part 1: Microservices & Containers – A Happy Union


Software applications have followed a standard design approach for decades: teams build and configure databases, implement server-side services and features, and develop a user interface that makes interaction between the application and its users possible. These three main components are usually complex, have many interdependencies, and can constitute an entire application or system. As applications evolved and software teams experienced attrition over the years, these systems often turned into monoliths that are difficult to maintain and upgrade. Challenges such as “dependency hell” can emerge, where it becomes difficult to track how various components interact and exchange data. This ultimately makes dependency management a full-time job for teams, as modifying one area of the application can produce unexpected behavior in another part. Another challenge is adding new features to the application: many interdependencies across components make it hard to understand the responsibilities of each component, and where a responsibility begins and ends. This adds an increased burden for teams maintaining the application and scaling it for future requirements.

Microservices are a design approach where components of an application are broken down into lightweight, independent services that communicate through Application Programming Interfaces (APIs). They can maintain their own state, manage their own databases, or remain stateless. Microservices focus on solving specific domain or business capabilities and should be granular. This approach comes with many benefits, including:

  • Ensuring a modular design
  • Decreasing the risk of failure in one service impacting another
  • Making updates and enhancements to individual services straightforward and focused
  • Deploying services independently and easily
  • Selecting the technology that best fits the requirements of that service

In contrast, traditional monolithic applications need to be completely rebuilt and deployed when components change, lose their modular structure over time, require scaling the entire application rather than individual components, and limit flexibility in technology choices. There are many topics to think through when approaching microservices, such as the design of each service and whether you’re migrating a monolithic application or building one from scratch.

The Integration of Microservices & Containers

Containers have become ubiquitous in software development and deployments, and our federal clients have embraced containers over traditional virtual machines. Containers provide the ability for development teams to build features and services for an application and guarantee that these features will work in every environment – including development, testing, production, and both virtual and physical servers. Containers are strictly separated from one another while sharing resources, so they can run on the same server in isolation and not impact each other if there’s a technical issue. Containers can also be ephemeral, created or destroyed easily. This enables teams to deploy and test new features in isolation and in any environment without impacting another developer’s workflow or other components of the application.

A container maintains its own runtime environment, tools, databases, and APIs – creating a completely isolated environment for service development. This provides a natural approach for creating and deploying microservices while incorporating microservice development into a team’s DevSecOps pipeline and workflow. A developer can use Docker or OpenShift to create a container in seconds to run, test, debug, and deploy their microservice. Once the developer is finished, they can destroy the container instance in seconds with no impact to other team members or other features within the application. This process speeds up the development cycle and the time to market for new features and enhancements.

With tools like Docker Compose, teams can define each microservice as a Docker container within a single file and execute multi-container Docker applications on any environment (e.g. testing your services in a staging or testing environment). Your Docker containers can then be deployed to Docker Swarm or Kubernetes for container orchestration, deployment management, and automatic creation and tear down of containers as needed (i.e. scaling). Leveraging Docker in conjunction with Docker Compose provides complete container and microservice integration, as each service is configured and managed in the container ecosystem.

Drinking Our Own Champagne

At eGlobalTech, we’ve developed microservices for multiple clients, including the development of new systems and the migration of monolithic applications to lean, microservice-powered systems. A recent client success involved migrating an existing system whose dozens of interdependencies and monolithic architecture made maintenance and upgrades cumbersome. Our team leveraged the Strangler Pattern (a design pattern for incrementally migrating legacy architecture components to microservices) to develop, test, and deploy dozens of new microservices that powered system messaging, alert aggregation and notification, and data formatting across multiple data formats. This enabled us to test our microservices against the existing services simultaneously and transition each service to the new microservice without any interruption to users.

Although microservices aren’t a silver bullet and require thought in their design and implementation, they provide a foundational application design that is modular, scalable, and extensible as the system evolves over time.

Stay tuned for Part 2 of this blog series as it will cover some common design techniques.

Contact info@eGlobalTech.com to learn more!

 
