The eGT Blog is a place we created to share what we have learned working in technology and cybersecurity in the public sector. We passionately believe in our federal clients’ missions to make America stronger, smarter, and safer, so we built this space to share ideas on how to advance our government’s innovation and modernization.

Compare Cloudamatic To Other Cloud Deployment Tools

eGlobalTech’s Cloudamatic (www.cloudamatic.com) is a 100% open source cloud deployment tool developed by our research and development arm, eGT Labs.

While most cloud deployment and migration tools focus on infrastructure, that is only a fraction of what needs to be done. Cloudamatic not only performs infrastructure migrations (including configuration, orchestration, patching, and monitoring with every deployment), but also migrates your applications entirely through automation.

See how Cloudamatic compares to its competitors:

Comparison of Cloudamatic to other cloud deployment tools

Techniques for Designing Microservices


Part 2 of our Microservices & Containers Series

In our “Part 1: Microservices & Containers – A Happy Union” blog post, we outlined the benefits of microservices and described how to integrate them with containers so teams can build and deploy microservices in any environment. In this second part of our two-part blog series, we explain approaches for defining and designing flexible microservices. When designing microservices, it’s important to ensure that each service maps to a single business capability. Because microservices follow a lean and focused philosophy, designing them around business capabilities ensures that no unnecessary features or functionality are designed or built. This reduces project risk, the need to refactor unnecessary code, and the complexity of the overall product. Since microservices are built around business capabilities, it’s critical to have business stakeholders or users participate in the design sessions.

Defining Microservices

It’s tempting to start implementing small services right away and assume that, when combined, all services will form a cohesive and modular product. Before diving into the implementation, it’s critical to understand the complete picture of all services and how they interact with one another to avoid feature creep and functionality that doesn’t meet business needs. An effective approach is to have key technical staff (usually a lead designer, technical lead, and architect) and stakeholders collaborate using event storming. Event storming enables project implementers and domain experts to describe an entire product or system in terms of the events that happen within it. This gives both business and technical staff a shared, complete view of the problem space and lets them design product services using plain, easy-to-understand descriptions rather than technical jargon.

Using post-it notes, the team arranges events in a rough order of how they might happen, without initially considering how they happen or what technologies or supporting structures might be involved. Events should be self-contained and self-describing, with no concern for implementation details. During this exercise, it’s helpful to draw a causality graph to explore when events occur and in what order. Once all events are documented, the team then explores what could go wrong in each context. This approach prompts the question “What events do we need to know about?” and helps identify missing events, a powerful technique for exploring boundary conditions and assumptions that affect realistic estimates of how complex the software will be to build. When the team feels all events have been adequately documented, the next step is to document user personas, commands, and aggregates.

  • User Personas
    • User personas document the various users who will use the system. Personas help teams understand the goals of the user performing a given action, which is helpful in the design phase.
  • Commands
    • A command is a user action or external-system trigger that causes an event.
  • Aggregates
    • An aggregate receives commands, decides whether or not to execute them, and produces new events as necessary (a minimal sketch of these concepts follows this list).
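To make these concepts concrete, here is a minimal sketch in Python of how a command, an aggregate, and the events it emits might be modeled. The claims domain, names, and business rule are purely hypothetical and are not drawn from any specific project.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    """Something that happened, described in business language."""
    name: str

@dataclass
class SubmitClaim:
    """A command: a user action (or external trigger) asking the system to do something."""
    claimant_id: str
    amount: float

@dataclass
class ClaimAggregate:
    """An aggregate receives commands, decides whether to execute them, and emits events."""
    max_amount: float = 10_000.0
    events: List[Event] = field(default_factory=list)

    def handle(self, command: SubmitClaim) -> Event:
        # The aggregate enforces a (hypothetical) business rule before producing an event.
        event = Event("ClaimSubmitted") if command.amount <= self.max_amount else Event("ClaimRejected")
        self.events.append(event)
        return event

# The "Claimant" persona issues a command; the aggregate records the resulting event.
aggregate = ClaimAggregate()
print(aggregate.handle(SubmitClaim(claimant_id="C-001", amount=2500.0)).name)  # ClaimSubmitted
```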

Once all personas, commands, and aggregates are documented, the team can see the big picture of how the entire system or product should work to meet all requirements. This approach is excellent for designing microservices because each event, or small group of related events, can be clearly assigned to a microservice. The service author then creates a service that accommodates only those events, yielding lean business capabilities with a well-defined scope and purpose. Event storming also works well for both technical and non-technical stakeholders, as the entire system is described by its events. This removes barriers to stakeholder participation in the design process because technical implementation details are not discussed. The approach works equally well for an existing system or a new application.

Design Techniques for Microservices

Once a team has all their services defined and organized, they can focus on the technical details for each microservice. The implementation details will be specific to a given service, and below are guidelines that will help when building out a microservice:

  • Develop a RESTful API
    • Each microservice needs a mechanism for sending and consuming data and for integrating with other services. To ensure smooth integration, expose a well-defined API with the appropriate functionality, response data, and formats (a minimal sketch follows this list).
  • Manage Traffic Effectively
    • If a microservice must handle thousands or millions of requests from other services, it may not be able to keep up with the load on its own and will become ineffective in meeting the needs of its consumers. We recommend using a messaging and communication service such as RabbitMQ or Redis to buffer and distribute the traffic load.
  • Maintain Individual State
    • If a service needs to maintain state, that service can define the database requirements that satisfy its needs. Databases should not be shared across microservices, as this violates the principle of decoupling, and database table changes in one microservice could negatively impact another service.
  • Leverage Containers for Deployments
    • As covered in Part 1, we recommend deploying microservices in containers so only a single tool is required (containerization tools like Docker or OpenShift) to deploy an entire system or product.
  • Integrate into the DevSecOps Pipeline
    • It’s important that each microservice maintain its own separate build and be integrated into the overall DevSecOps CI/CD pipeline. This makes it easy to perform automated testing on each individual service and to isolate and fix bugs or errors as needed.
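As an illustration of the first guideline above, here is a minimal sketch of a RESTful microservice using Flask, a common Python web framework. The “orders” domain, endpoints, and data are hypothetical and exist only to show the shape of a small, well-scoped service API.

```python
# pip install flask
# A minimal, hypothetical "orders" microservice exposing a small REST API.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store for illustration only; a real service would own its own database.
ORDERS = {"1001": {"status": "shipped"}}

@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": order_id, **order}), 200

@app.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json(force=True)
    order_id = str(1000 + len(ORDERS) + 1)
    ORDERS[order_id] = {"status": payload.get("status", "pending")}
    return jsonify({"id": order_id}), 201

if __name__ == "__main__":
    app.run(port=5000)
```

Because the service exposes only these order-related endpoints, its scope stays aligned with a single business capability, and other services integrate with it purely through the API.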

How eGlobalTech Can Help You Deploy Microservices

As outlined in Part 1 of our blog series, eGlobalTech has extensive past performance developing and deploying microservices for multiple clients. Our experience includes containerization through Docker and OpenShift, and we have leveraged containers to deploy microservices across many complex applications. Our Technology Solutions group has built and integrated microservices into existing legacy applications, developed new applications using microservices, and migrated legacy architectures to fully microservices-driven architectures. If you’d like to discuss how eGlobalTech can help your organization embrace or implement microservices, please email our experts at info@eglobaltech.com!

USPTO Awards Cloud Containerization Contract to eGlobalTech


eGlobalTech was awarded a contract by the U.S. Patent and Trademark Office (USPTO), Office of Infrastructure, Engineering and Operations under the Office of the Chief Information Officer, to advance containerization for USPTO’s IT West Lab Environment.

eGlobalTech applied its deep expertise in cloud deployment automation, using a wide array of open-source containerization and orchestration technologies such as Docker and Kubernetes, to help the USPTO execute a cloud smart strategy.

Containers will enable the USPTO to build features and services consistently across the USPTO OCIO. This will yield cost savings because consistent environments are easier to maintain, containers facilitate faster software delivery, and containers share and use resources more efficiently, reducing dependence on virtual machines. These benefits all flow from implementing a cloud smart strategy.

Interested in learning more about our work? Find out more about our past performance here.

eGlobalTech Joins Tetra Tech


We’re thrilled to announce that eGlobalTech has joined the Tetra Tech family of companies. Tetra Tech, headquartered in California, is a leading provider of high-end consulting and engineering services for projects worldwide.

For 15 years we’ve provided our clients with innovative solutions and cutting-edge technologies. Today is a new chapter in our story; this acquisition combines Tetra Tech’s mission expertise with our high-end IT consulting services, providing new and exciting opportunities for both our employees and clients. We look forward to continuing to serve our clients as we join the Tetra Tech family.

Read the full press release from Tetra Tech for more information.

Part 1: Microservices & Containers – A Happy Union


Software applications have followed a standard design approach for decades: teams build and configure databases, implement server-side services and features, and develop a user interface that makes interactions between the application and its users possible. These three main components are usually complex, have many interdependencies, and can constitute an entire application or system. As applications evolved and software teams experienced attrition over the years, these systems often turned into monoliths that are difficult to maintain and upgrade. Challenges such as “dependency hell” can emerge, where it becomes difficult to track how components interact and send or consume data with one another. This ultimately makes dependency management a full-time job, as modifying one area of the application can cause unexpected behavior in another part. Another challenge is adding new features. Many interdependencies across components make it hard to understand each component’s responsibilities and where a responsibility begins and ends, which increases the burden on teams maintaining the application and scaling it for future requirements.

Microservices are a design approach in which the components of an application are broken down into lightweight, independent services that communicate through Application Programming Interfaces (APIs). They can maintain their own state and manage their own databases, or remain stateless. Each microservice focuses on a specific domain or business capability and should be granular. This approach comes with many benefits, including:

  • Ensuring a modular design
  • Decreasing the risk of failure in one service impacting another
  • Keeping updates and enhancements straightforward and focused
  • Deploying services independently and easily
  • Selecting the technology that best fits the requirements of that service

In contrast, traditional monolithic applications must be completely rebuilt and redeployed when components change, lose their modular structure over time, require scaling the entire application rather than individual components, and limit flexibility in technology choices. There are many topics to think through when approaching microservices, such as the design of each service and whether you’re migrating a monolithic application or building one from scratch.

The Integration of Microservices & Containers

Containers have become ubiquitous in software development and deployments, and our Federal clients have embraced containers over traditional virtual machines. Containers allow development teams to build features and services for an application and ensure that those features work in every environment, including development, testing, production, and both virtual and physical servers. Containers are cleanly separated from one another while sharing resources, so multiple containers can run on the same server in isolation without impacting each other if there’s a technical issue. Containers can also be ephemeral, created or destroyed easily. This enables teams to deploy and test new features in isolation, in any environment, without impacting another developer’s workflow or other components of the application.

A container maintains its own runtime environment, tools, databases, and APIs, creating a completely isolated environment for service development. This provides a natural approach for creating and deploying microservices while incorporating microservice development into a team’s DevSecOps pipeline and workflow. A developer can build their microservice and use Docker or OpenShift to create a container in seconds to run, test, debug, and deploy it. Once finished, the developer can destroy the container instance in seconds with no impact on other team members or other features of the application. This process shortens the development cycle and the time to market for new features and enhancements.
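To illustrate how lightweight that run-and-tear-down cycle can be, here is a small sketch using the Docker SDK for Python (the docker package). The image and command are placeholders; this is just one way to script the ephemeral workflow described above, assuming a local Docker daemon is available.

```python
# pip install docker
# Spin up a throwaway container, run a quick check, then tear it down.
import docker

client = docker.from_env()  # connects to the local Docker daemon

container = client.containers.run(
    "python:3.11-slim",  # placeholder image standing in for the microservice under test
    command=["python", "-c", "print('service smoke test passed')"],
    detach=True,
)

container.wait()                          # block until the command finishes
print(container.logs().decode().strip())

# Destroy the instance; nothing about this run affects other developers or environments.
container.remove(force=True)
```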

With tools like Docker Compose, teams can define each microservice as a Docker container within a single file and run multi-container Docker applications in any environment (e.g., testing services in a staging or test environment). The Docker containers can then be deployed to Docker Swarm or Kubernetes for container orchestration, deployment management, and automatic creation and teardown of containers as needed (i.e., scaling). Using Docker in conjunction with Docker Compose provides complete container and microservice integration, as each service is configured and managed within the container ecosystem.

Drinking Our Own Champagne

At eGlobalTech, we’ve developed microservices for multiple clients, including the development of new systems and the migration of monolithic applications to lean, microservice-powered systems. A recent client success involved an existing system with dozens of interdependencies and a monolithic architecture that made maintenance and upgrades cumbersome. Our team leveraged the Strangler Pattern (a design pattern for incrementally migrating legacy architecture components to microservices) to develop, test, and deploy dozens of new microservices that powered system messaging, alert aggregation and notification, and data formatting across multiple data formats. This enabled us to test our microservices against the existing services simultaneously and transition each service to the new microservice without any interruption to users.

Microservices aren’t a silver bullet and require careful design and implementation, but they establish a foundational application design that is modular, scalable, and extensible as the system evolves over time.

Stay tuned for Part 2 of this blog series as it will cover some common design techniques.

Contact info@eGlobalTech.com to learn more!

 


 

DevSecOps and Espier Achieve Authority to Operate Faster

The ATO Journey & Its Challenges

The Authority to Operate (ATO) is a formal declaration and approval that a software environment is ready to be deployed onto a federal production environment. Achieving an ATO involves a long and careful process that requires evaluating each tool in the technology stack and ensuring that it won’t put the security posture of the environment at risk. Multiple security and vulnerability tests are executed, manual security scans are run, and even load balancers such as F5 appliances must be configured properly to pass an ATO evaluation. The evaluation can take anywhere from 6 to 12 months, and sometimes longer depending on the outcome of the initial assessments. As one can imagine, this is a daunting exercise that can leave teams scrambling and stressed. Federal programs will sometimes make technological and architectural decisions based on whether they make the ATO easier to achieve, even if those decisions are not optimal or don’t position the technology stack for future scalability. Achieving an ATO is one challenge; maintaining it is another. After an ATO, recurring evaluations are performed to ensure the security standards for that federal agency are still being met.

Introducing DevSecOps & Espier

This is where DevSecOps, the next step in the evolution of DevOps, and Espier (an open source plugin for penetration and Open Web Application Security Project [OWASP] testing) come in. In DevOps, developers create continuous integration / continuous delivery (CI/CD) pipelines that build, deploy, and test software with every commit to the code repository. Code is unit tested, regression tested, and validated with each build, then deployed upon a successful test run. DevOps reduces technical debt, errors, and bugs in each development sprint and makes teams more productive because automation is built into each step. Espier integrates security and vulnerability testing into the DevOps process (hence DevSecOps), automatically testing your security posture with each developer commit.

The Value of Espier

Espier is a Jenkins plugin that automatically scans for cross-site scripting and SQL injection vulnerabilities and performs other types of penetration testing. It continuously detects vulnerabilities as part of every software build, allowing developers to remediate them incrementally. At most federal agencies, penetration testing is disconnected from the core development process and conducted late in the system delivery lifecycle, yet for an ATO it is a requirement that needs to be satisfied early on. Espier runs as a series of tests alongside your test suite in Jenkins. Because it is encapsulated in Jenkins, Espier supports Docker and deployments to multiple environments. eGT Labs, the innovation engine that created Espier, chose to build a plugin rather than a standalone application to avoid introducing an additional tool, which for federal projects can be a big deal: adding tools can modify an ATO posture, but if Jenkins is already approved, Espier can be integrated easily. Plugins are also extensible and easy to install and maintain, and we chose Jenkins because it is the industry-standard tool for CI/CD and DevSecOps. Even if you use SonarQube or a static analysis tool like Fortify in your stack, penetration tests are often overlooked and can be difficult to emulate. Espier is a simple solution that is free, open source, and available to use today.

Contact eGT Labs at eGTLabs@eglobaltech.com to get started with Espier!

 


Everything-as-a-Service: Have Federal CIOs Found Their Holy Grail?

As cloud services grow in capability and adoption, federal agencies are pondering whether their organizations should entrust their IT to commercial cloud providers and, more significantly, how far they should go in relying on cloud services. The concept of adopting an Everything-as-a-Service (XaaS) model is growing in popularity among industry pundits and writers. But what exactly is XaaS? XaaS is a cloud computing term for practicing a cloud-first policy by adopting cloud-based technologies instead of on-premises solutions. The objective of XaaS is to reduce, as much as possible, the human capital and capital expenditures required to maintain on-premises, hosted applications. XaaS holds the promise of maximizing IT efficiency and applying more IT dollars in direct support of an organization’s mission.

XaaS-driven organizations seek to adopt cloud services in several forms. In order of potential value (from low to high), Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) have become the main avenues for organizations to push as many workloads out of their data centers as possible. In particular, agencies are now focused on pushing workloads to fully managed environments, including PaaS and SaaS, in order to reduce costs. Newer forms of cloud services continue to emerge, such as Data-as-a-Service (DaaS), Mobility-as-a-Service (MaaS), and Security-as-a-Service (SECaaS). These digital services have expanded to cover nearly every workload a typical organization might require. As we will see in this blog, CIOs across all agencies should strongly consider making the move to XaaS to achieve greater responsiveness to their agencies’ IT needs.

Gaining Greater Efficiencies with Everything-as-a-Service

If we look at IT through the lens of efficiency, we see increasing efficiencies as we progress up the cloud service ladder from IaaS to PaaS to SaaS. Each cloud solution represents an improvement, in most respects, over on-premises infrastructure hosting. On-premises hosting places the greatest burden on an organization because of expenditures like office space, power, uninterruptible power supplies, generators, networking infrastructure, network backbone, industrial-strength air conditioning, server room infrastructure, and servers. Simply put, it is very expensive and complex to maintain quality on-premises applications. A cloud transformation to XaaS brings tremendous efficiencies, and each of the cloud service types described below provides increasing efficiency wherever it is applied. Most importantly, commercial cloud providers possess the talent and resources to ensure a high degree of security and scalability.

Infrastructure-as-a-Service

IaaS provides a relatively faithful replacement for physical infrastructure, hardware, networking, and core security services. It offers virtualized implementations of most physical server resources, with a nearly limitless supply of virtual machines, networking assets, and security features for any size workload. The major advantage is that with IaaS you can almost instantly provision new virtual networks and machines with a high degree of scalability. IaaS greatly improves your organization’s stability and is suitable for most types of workloads. The one downside is that consumers still own the cost of configuring and managing servers from the operating system through the application tiers. While IaaS is a step forward from on-premises hosting, PaaS offers a further gain in efficiency.

Platform-as-a-Service

With PaaS, providers offer pre-configured infrastructure and manage both the operating system and the application platform. The consumer manages only their data and applications, which substantially reduces configuration, operations, and maintenance costs. PaaS also provides enhanced scalability, so the consumer is not burdened with complex configuration tasks, patching, and management-related expenses. Succinctly put, PaaS improves upon IaaS by managing the operating systems, networking infrastructure, and scaling infrastructure so your organization can spend more time engineering the business logic of applications and less time building indirect value through infrastructure management. According to Statista’s Platform as a Service Dossier[1], the recent evolution of PaaS resulted in a very significant 32% growth of the PaaS segment from 2016 to 2017. PaaS is good, but to maximize cost efficiency and scalability, SaaS is better.

Software-as-a-Service

SaaS holds the potential of being the crème de la crème of XaaS, with the highest return on investment (ROI), because the commercial service provider offers a turnkey software solution. Compared to applications hosted on IaaS or PaaS, SaaS requires minimal software development and operational support. Consumers need not worry about hardware, software, configuration, or patching because the service provider manages it all; they typically purchase only licenses for their employees. Offloading nearly all infrastructure engineering and support responsibilities allows agencies to shift dollars and focus toward pure business automation support and innovative projects that directly support the agency’s mission.

With current XaaS options in the marketplace, agencies have an opportunity to reduce human capital cost and capital expenditures while gaining major efficiencies and ROI. Is XaaS, therefore, going to be the holy grail of IT management for CIOs?

Benefits of XaaS

With XaaS, it is possible to provision nearly any business service almost instantaneously. For example, office automation can be immediately and cost-effectively provisioned through products such as Microsoft Office 365 and Google Apps. Within XaaS, the SaaS market has grown substantially, with ever-increasing breadth and depth of services to address common business needs. SaaS offerings support an overwhelming majority of workloads such as human resources (HR), customer relationship management (CRM), and enterprise resource planning (ERP). Services such as Automatic Data Processing (ADP), Salesforce, SugarCRM, Oracle Cloud, and Microsoft Dynamics have become compelling solutions. These services may be instantly provisioned, require little maintenance, feature fault tolerance and redundancy, and offer nearly infinite scalability.

Implementing XaaS is relatively painless and fast for most organizations; you don’t have to spend years building out an IT ecosystem as you would with traditional IT models. Best of all, XaaS holds the promise of making IT spending highly efficient and directly supportive of an agency’s mission. Computerworld quotes a Computer Economics study that states “cloud-centric organizations are saving 15% in their IT budgets, mostly due to a reduction in data center costs.” Because of those savings, organizations were increasing investments in new IT projects. Many startup companies have adopted a XaaS philosophy, enjoying the benefits of never having built or co-located a server room, hired expensive infrastructure support personnel, or adapted business processes to existing software.

Funding is also an area where XaaS offers tremendous upside in terms of simplicity. Budgeting for XaaS is generally a matter of identifying the number of seats or licenses required for each service and projecting demand throughout the year; it scales with an agency’s demand. The XaaS budgeting model differs significantly from traditional IT procurement, where the government must spend months or years procuring hardware, licensing, storage, equipment, energy, cooling, and more, which can be a daunting and expensive task. With XaaS there is no need to track those costs, as they are borne by the service provider.
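As a simple illustration of that seat-based budgeting model, the sketch below projects an annual XaaS cost from seat counts and per-seat prices. The services, seat counts, and prices are entirely hypothetical and exist only to show the arithmetic.

```python
# Hypothetical seat-based XaaS budget projection (all figures are illustrative only).
services = {
    # service name: (seats, monthly price per seat in dollars)
    "office_automation": (500, 20.00),
    "crm":               (120, 75.00),
    "hr_suite":          (500, 8.00),
}

annual_total = 0.0
for name, (seats, monthly_price) in services.items():
    yearly = seats * monthly_price * 12
    annual_total += yearly
    print(f"{name:<18} {seats:>4} seats -> ${yearly:>12,.2f}/year")

print(f"{'projected total':<18} {'':>10} -> ${annual_total:>12,.2f}/year")
```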

For the public and defense sectors, there is an additional compelling aspect of XaaS. Many government IT organizations struggle to meet Continuity-of-Operations (COOP) requirements due to technical complexity, acquisition processes, and the number of agreements that must be executed. Most modern commercial XaaS services have COOP built in, operate from multiple “regions,” and can automatically scale across those regions. This helps government IT organizations satisfy their COOP requirements with comparatively little effort.

The Potential Difficulties with XaaS

As with any IT model, there are pros and cons, and it’s important to evaluate which outweighs the other. Adopting a XaaS strategy in the public sector is not without risk. XaaS is heavily network-dependent because, by design, all services are accessed via the Internet or a private VPN. As such, there will be increased costs for maintaining highly performant and robust networks, although the savings realized by transforming to XaaS are much greater. Implementing a XaaS strategy also requires a substantial amount of transformation and may initially have a positive or negative impact on existing business processes. Lastly, there are lingering perceptions regarding the security and reliability of cloud services, especially among system security managers.

Conclusion

XaaS provides a tremendous opportunity to achieve a leaner, stronger, and more agile government. Agencies will be able to rapidly field new capabilities without the delays inherent in traditional IT hosting approaches or procurement models, and they will no longer need dedicated on-premises physical infrastructure for IT hosting. Because of the significant ROI achieved, agencies will be able to transition IT support staff from maintaining legacy systems to building valuable, innovative solutions that directly impact the mission. The complexity of funding is reduced, and IT budget forecasting is simplified. With proper planning and adherence to FedRAMP accreditation, transforming to XaaS will lead to more secure, scalable solutions with greater continuity. Agencies that opt out of XaaS will be at a disadvantage and will continue to incur unnecessary expenses to maintain their IT infrastructure. If implemented and managed correctly, Federal CIOs may have, after all these years of searching, found the holy grail of IT management!

 

 Contact us at info@eglobaltech.com for more details about how XaaS can transform your organization.

 


Succeeding in a Marriage of DevOps and Security: 5 Tips for a Happy and Prosperous Union


Why DevOps and Cybersecurity Need to Marry

It is well known that modernization through DevOps and cybersecurity are the two biggest IT challenges facing the federal government today, but how can federal CIOs rapidly address both at once? Cybersecurity has historically operated as an independent entity, with an emphasis on achieving compliance rather than engineering next-generation security. Development and operations teams have typically viewed cybersecurity as a hindrance because it impacts their ability to adopt new technologies and slows down their process. In many cases, cybersecurity is the last stage of the development lifecycle, where long lists of security problems and compliance issues are analyzed, documented, and left waiting for remediation. By definition, DevOps does not imply the exclusion of security or of any facet of the software development and delivery process. However, for federal institutions, the role of security needs to be made explicit; it can no longer be an afterthought in the DevOps process.

By explicitly fusing security into the DevOps model and calling it DevSecOps, we are declaring a union in which teams collaborate to deliver secure, high-performing systems. DevSecOps injects security into the foundation of software development from day one and provides an integrated approach to unifying teams, technologies, and processes for faster, more robust, and more secure products.

DevSecOps teams with cross-functional capabilities can work together to address two crucial objectives of any organization: increasing the rate of releases and maximizing security protection. This model generates a wealth of learning and experiential practices that can cost-effectively accelerate the pace of federal system modernization initiatives.

Keys to a Successful Marriage of DevOps & Cybersecurity

Uniting development, operations, and security in a DevSecOps model to deliver modern and secure technology solutions is not an easy endeavor. An effective marriage between the two requires the following five building blocks:

  1. Build trust early – empathy, collaboration, and communication

DevSecOps teams are not yet common, so you may be assembling a team that has no experience working in this type of model. DevOps team members may have preconceived notions about the security team members and vice versa. It is imperative to build trust among team members early through collaboration and communication. The team needs to buy in and develop a sense of empathy for each other’s concerns. With this, they can collectively work toward an innovative solution that uses the most appropriate technology and is designed for security from inception. This includes instrumenting security controls into the architecture, analyzing it for vulnerabilities, and addressing them as part of the development process. Security issues and tasks should be tracked in the same common product backlog and prioritized along with feature stories prior to each sprint.

  2. Come together and establish a common process framework

A common process framework should be established by unifying and aligning the security governance, risk, and compliance process defined by the NIST Risk Management Framework with the organization’s system development lifecycle. A security engineer should be dedicated to this effort and participate from inception. From architecture design, to active development, to testing and operations, the security engineer actively collaborates with peers to integrate and implement secure engineering and design practices. This facilitates collaboration and an investment in each other’s success: collectively they succeed or fail, and never point fingers.

  3. Be kind to your partner – commit to collaborate

A DevSecOps team has many diverse players with different goals and responsibilities. It is important to first rally the team around a common goal so that everyone’s expertise is applied collaboratively. Next, it is important to learn, understand, and appreciate each other’s concerns, thereby eliminating the “no, it is not possible” mindset. It is critical that team members understand each other’s challenges and jointly explore alternative approaches to reach the best possible solution.

  4. Simplify life – automate repetitive tasks

DevOps teams automate through a CI/CD pipeline. Cybersecurity engineers should integrate into this pipeline, automating tasks that range from security testing to monitoring and even documentation. By adhering to a common CI/CD pipeline, everyone works from the same unified software release quality report, which lays bare the facts on security failures and software quality and eliminates any chance of misinterpretation or misunderstanding.
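To make this concrete, here is a generic sketch (not Espier itself, and not tied to any specific pipeline) of a security smoke test that a CI/CD job could run on every commit. The target URL and the required-header policy are assumptions chosen purely for illustration.

```python
# pip install requests
# A generic security smoke test a CI/CD pipeline could run on every build.
# Illustrative only; the target URL and header policy are hypothetical.
import sys
import requests

TARGET = "https://staging.example.gov"  # hypothetical deployment under test
REQUIRED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def missing_security_headers(url: str) -> list:
    """Return the required security headers that the deployment does not send."""
    response = requests.get(url, timeout=10)
    return [h for h in REQUIRED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = missing_security_headers(TARGET)
    if missing:
        print(f"FAIL: missing security headers: {', '.join(missing)}")
        sys.exit(1)  # fail the build so the issue is fixed in the same sprint
    print("PASS: baseline security headers present")
```

Wiring a check like this into the same pipeline that runs unit and regression tests keeps security findings in the same release quality report the rest of the team already watches.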

eGT Labs®, the research and development arm of eGlobalTech (eGT), developed Espier®. This security tool automates and integrates security penetration testing as part of the CI/CD pipeline, enabling early detection and faster remediation of vulnerabilities while ensuring that only secure code is deployed.

  5. Keep the spark alive – continuously learn and evaluate emerging tools and technologies for adoption

DevSecOps teams need to keep the spark alive by continuously innovating. Developers need to stay ahead of the curve and produce the most cutting-edge solutions, while security experts need to make sure they are evaluating and securing those new technologies. While this is not a simple task for either side of the house, it is a challenge that will keep the marriage exciting. Much of this innovation should take the form of automation for both development and security processes. On the security compliance front, composable security documentation practices are becoming viable and practical. They replace a document-heavy compliance process with machine-generated data constructs that give a factual depiction of security posture, as opposed to purely human-written opinions. By embracing these emerging practices, DevSecOps teams can achieve process efficiencies and deliver secure software faster.

Case Study

A public sector eGT client had a complex geospatial system prototype composed of Microsoft and open source applications with a growing number of ArcGIS services. This prototype was used in a production capacity and encountered frequent outages, as well as performance and security issues. eGT was engaged to transform and migrate this ecosystem to the cloud.

eGT applied our DevOps Factory® framework to re-engineer the target architecture, implement a security-first design, and automate the end-to-end cloud migration process onto our managed AWS infrastructure. By collaborating with the security organization and instrumenting security controls into the architecture from inception, we passed all required security audits with no major Plans of Action and Milestones (POA&Ms) and achieved full Authority to Operate (ATO) within four months.

Leveraging Cloudamatic®, an open source cloud-orchestration framework developed by eGT Labs, we instrumented security and operational monitoring and log aggregation as part of the automated deployment process. This enabled our team to proactively detect security threats and respond quickly, patching cloud environments composed of hundreds of instances running both open source and commercial software in a matter of minutes.

DevSecOps: Happily Ever After

The needs to “accelerate time to market” and “maximize application and information protection” are not diametrically opposed. They are two conjoined business requirements for any agency that wants to thrive in today’s digital world. Government CIOs can achieve both by uniting teams in a DevSecOps model and using best practices to build a strong, successful team culture.

Please contact devopsfactory@eglobaltech.com to learn more about how eGlobalTech practices DevSecOps in system modernization initiatives at DHS, HHS, and other civilian agencies.

 


Serverless Computing: The Next Cloud Evolution


Cloud Genetics Are Mutating

Throughout evolution, there is usually a period when certain traits of an organism make little sense and become vestiges of the past, while certain genetic mutations make the organism stronger and more competitive. Web applications offer a modern example. Most cloud-hosted web applications reside on dedicated virtual servers, racking up charges and consuming electricity and hardware resources whether they are in use or not. These applications typically run on virtual machines that must be provisioned, configured, maintained, and patched by developers and operations staff, and they do not seamlessly scale with usage without some human intervention. In a world of software-defined everything, developing applications and deploying them to fixed virtual servers no longer makes sense for many low- to moderate-intensity application workloads. However, a new genetic evolution has emerged, making web applications more scalable and efficient than ever before: web application genetics have started to mutate, with event-driven development and serverless computing as new capabilities. Serverless computing takes Platform as a Service (PaaS) to a new level by completely abstracting server configuration away from the developer, allowing them to focus on the solution rather than the underlying infrastructure. Event-driven development allows code to stay completely dormant until an application service is requested by a user.

Serverless computing manages all creation, scaling, updating, and security patching of the underlying server resources. It abstracts nearly all server details from the developer, eliminating the need to create virtual machines, install software, apply security patches, and perform advanced configuration and maintenance. With serverless computing, developers simply create a serverless service and, instead of receiving a new virtual server, they receive a service endpoint and credentials. The service manages provisioning, autoscaling, maintenance, and updates to keep services running continuously. Developers use these endpoints with the same tools and development environments they use today to build clean, elegant serverless solutions.

To capitalize on this efficiency, web applications are increasingly evolving toward serverless computing using components such as Amazon Web Services (AWS) Lambda, Azure Functions, and Google Cloud Functions. Each of these environments combines event-driven architecture with serverless compute, creating highly efficient service environments that can rapidly provision on demand to meet user requirements and then scale back down to reduce costs. Serverless database services (AWS Aurora, SQL Azure, Google Cloud databases) are evolving along the same lines.

In short, aspects of the cloud’s genetics have mutated into serverless computing, an architecture model in which code execution is completely managed by the cloud provider. Developers no longer have to provision, manage, and maintain servers when deploying code, and they don’t have to define how much storage and database capacity is required before deployment, which reduces the time to production.

How Will Cloud Computing Genes Diversify for Better Efficiency?

AWS Lambda, introduced in 2014, provides an event-driven compute service that runs code without provisioning or managing servers, essentially executing code on demand. It has since been joined by Azure Functions and Google Cloud Functions. These services are known as Function as a Service (FaaS): you pay per execution rather than for underlying resource usage. With AWS Lambda, your service activates when triggered by a client request to execute code, and when the execution is complete, the resources are scaled back. Serverless functions are most effective with code that executes for relatively short periods, typically less than 5 minutes, and they can be configured to scale automatically, making quick work of large workloads. With serverless design, dormant services are scaled down and don’t run up the bill.
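For illustration, a minimal Lambda function written for the Python runtime looks like the sketch below. The event fields are placeholders; in practice the function would be wired to a trigger such as an API Gateway request, an S3 upload, or a scheduled event.

```python
# A minimal AWS Lambda handler (Python runtime). The platform provisions and scales
# the underlying compute, and you are billed only for the time the handler runs.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; the 'name' field here is a placeholder.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```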

Is the Serverless Computing Genetic Evolution Compatible With Any Tech Ecosystem?

Continuing with our genetic evolution metaphor, a mutation survives only if it benefits the organism and, in some way, strengthens its chances of survival. Similarly, serverless computing must strengthen applications without making them incompatible with their ecosystem. While all signs are promising, developers and architects are still working out the opportunities and challenges of this model. Serverless computing offers increased productivity while reducing costs; however, it comes with practical, real-world challenges. A few of those issues include:

  • The technology behind serverless computing is still nascent, so there is limited documentation on ideal usage and best practices. In some cases, it is trial and error for teams utilizing the technology.
  • Serverless technology may require rework of some code. Code that is stateful and that does not follow common web application best practices would have to be rewritten.
  • The programs supporting serverless computing are not yet approved by FedRAMP; however, approval is expected soon.

Is Serverless Computing Working in the Federal Government?

eGlobalTech is implementing serverless architecture in its Federal engagements. Recently, we collaborated with the Federal Emergency Management Agency (FEMA) to migrate a mission-critical system to an AWS cloud platform. We are now modernizing the portions of the system identified as good candidates to go event-driven and serverless, which will provide massive scalability improvements over the existing solution.

Contact us today at info@eglobaltech.com if you need help strategizing and implementing your serverless computing project.

 


 

Big Data is Not Technology-Focused


The business case for leveraging Big Data is discussed extensively in conferences and publications worldwide, and it could not be clearer: it provides instant insight for better decision making. As a result, Big Data is on the mind of business and technology leaders in almost every industry sector imaginable, especially the public sector. The Social Security Administration (SSA) uses Big Data to better identify suspected fraudulent claims, and the Securities and Exchange Commission (SEC) applies Big Data to identify nefarious trading activity. Many other Federal agencies are looking at ways to use Big Data to help execute their mission areas. In addition, a growing market of tools and techniques is now available to help Federal agencies effectively analyze large volumes of disparate, complex, and variable data in order to draw actionable conclusions. Nevertheless, many of these same leaders struggle to realize the benefits Big Data has to offer, mainly because they:

  • Believe it is revolutionary and technology-focused, rather than an iterative and cyclical process.
  • Cannot determine the value of the data available to them. This challenge is multiplied when you consider that this data is constantly and rapidly expanding in volume, variety, and velocity (due in part to factors such as IoT, social media, and video and audio files).
  • Are concerned about security and privacy issues.

Overcoming Big Data Challenges

Although these challenges can seem overwhelming, they can be overcome through a methodical process focused on improving your agency’s mission performance. The key is to adopt a use case-driven approach to determine how and when to begin your Big Data migration. Assuming your organization has already developed a Big Data strategy and governance framework, this iterative approach begins with the business (non-IT) stakeholders who support your organization’s primary or core functions. The objective is to use their institutional knowledge to define and prioritize a set of business needs and gaps whose resolution would help them perform their jobs more effectively. Once these are defined, the march toward Big Data begins in an iterative and phased manner.

Implementing Big Data in an Iterative and Phased Approach

Following the prioritized order, each business need should be decomposed into a use case (including items such as process flows, actors, and impacted IT systems). This decomposition also helps identify all available data assets, private and public, that touch on the use case. In doing so, it is critical to brainstorm as broadly as possible. If a municipal government were trying to analyze how weather conditions impact downtown traffic patterns, it might use data from local weather reports, traffic camera feeds, social media reports, road condition reports, 911 calls reporting vehicular accidents, maintenance schedules, traffic signal timings, and so on, not just the obvious traffic and weather reports. Each data asset should be assessed for quality (to determine whether cleanup is required) and mapped to its primary data source.
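To ground the municipal example, the sketch below joins two hypothetical data assets, hourly weather observations and traffic incident reports, using pandas. The columns and values are invented purely for illustration.

```python
# pip install pandas
# Joining two hypothetical data assets from the municipal use case:
# hourly weather observations and traffic incident reports.
import pandas as pd

weather = pd.DataFrame({
    "hour":      ["2018-03-01 08:00", "2018-03-01 09:00"],
    "condition": ["heavy_rain", "clear"],
})

incidents = pd.DataFrame({
    "hour":          ["2018-03-01 08:00", "2018-03-01 08:00", "2018-03-01 09:00"],
    "incident_type": ["collision", "stalled_vehicle", "collision"],
})

# Count incidents per hour, then attach the weather condition observed in that hour.
counts = incidents.groupby("hour").size().reset_index(name="incident_count")
combined = counts.merge(weather, on="hour", how="left")
print(combined)
```

A data scientist could extend a join like this across the other data assets named above (road conditions, 911 calls, signal timings) to test whether conditions such as heavy rain correlate with higher downtown incident counts.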

Securing Big Data and Establishing Privacy Standards

Now that we have a handle on the data and the data sources that will be analyzed to support our use case, defining the proper security and privacy requirements for the Big Data analytics solution should be more straightforward (rather than defining a broad set of security controls for a Big Data environment in general). These requirements should be based upon the data assets with the highest sensitivity level, as well as the sensitivity levels of any analysis results that might be derived from various combinations of the data assets. The requirements will, in turn, enable the definition of the appropriate security controls that need to be integrated into any solution to reduce risk and prevent possible breaches.

Selecting the Right Big Data Solution

At this point, you are ready to start considering technology solutions to perform your Big Data analysis (such as analytics, visualization, and data warehousing tools). The technology you select, including infrastructure, core solutions, and any accelerators, should be based on the information collected up to this point, including security requirements, data structures (structured vs. semi-structured vs. unstructured), and data volume. Remember, no single technology is required for a Big Data solution; rather, the choice should be driven by your specific requirements. Data scientists can then use these technologies to develop algorithms to process the data and interpret the results. Once completed, you should move on to the next use case.

In summary, it is essential to remember that communication, change management, and governance are key to deriving meaningful, usable results from Big Data. Other key success factors include:

  • Do not start with a technology focus. Instead, concentrate on business/mission requirements that cannot be addressed using traditional data analysis techniques.
  • Augment existing IT investments to address initial use cases first, then scale to support future deployments.
  • After the initial deployment, expand to adjacent use cases, building out a more robust and unified set of core technical capabilities.

These factors will ensure your agency adopts Big Data securely and effectively, achieving results at each iterative step and maximizing the use of your valuable resources.

Contact us today at info@eglobaltech.com if you need help strategizing and implementing your Big Data project.

 
