The eGT Blog is a place we created to share what we have learned working in technology and cybersecurity in the public sector. We passionately believe in our federal clients’ missions to make America stronger, smarter, and safer, so we created this space to exchange ideas on advancing our government’s innovation and modernization.

Part 1: Microservices & Containers – A Happy Union


Software applications have followed a standard design approach for decades: teams build and configure databases, implement server-side services and features, and develop a user interface that lets users interact with the application. These three main components are usually complex and carry many interdependencies. As applications evolved and software teams experienced attrition over the years, these systems often turned into monoliths that are difficult to maintain and upgrade. Challenges such as “dependency hell” can emerge, where it becomes difficult to track how components interact and exchange data. Dependency management then becomes a full-time job, because modifying one area of the application can produce unexpected behavior in another. Adding new features is another challenge: heavy interdependencies across components make it hard to understand the responsibilities of each application component, and where one responsibility begins and another ends. All of this increases the burden on teams maintaining the application and scaling it for future requirements.

Microservices are a design approach where components of an application are broken down into lightweight, independent services that communicate through Application Programming Interfaces (APIs). They can maintain their own state, manage their own databases, or remain stateless. Microservices focus on solving specific domain or business capabilities and should be granular. This approach comes with many benefits, including:

  • Ensuring a modular design
  • Decreasing the risk of failure in one service impacting another
  • Making updates and enhancements to a service straightforward and focused
  • Deploying services independently and easily
  • Selecting the technology that best fits the requirements of that service

In contrast, traditional and monolithic applications need to be completely rebuilt and deployed when components change, lose their modular structure over time, require scaling the entire application over individual components, and eliminate flexibility with technology choices. There are many topics to think through when approaching microservices, such as the design of each service and whether you’re migrating a monolithic application or building one from scratch.
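To make the contrast concrete, here is a minimal sketch of a single-responsibility microservice built with only the Python standard library; the service name, route, and payload are illustrative assumptions, not a prescribed design.

```python
# A minimal single-responsibility service exposing one JSON API
# endpoint, using only the Python standard library. The service name,
# route, and payload are illustrative, not a prescribed design.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertServiceHandler(BaseHTTPRequestHandler):
    """Hypothetical 'alerts' microservice: one narrow responsibility."""

    def do_GET(self):
        if self.path == "/api/alerts":
            body = json.dumps({"service": "alerts", "alerts": []}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:  # anything outside this service's domain is not served here
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence request logging for the demo

def make_server(port=0):
    # port=0 asks the OS for any free port
    return HTTPServer(("127.0.0.1", port), AlertServiceHandler)
```

Because the service owns only its own endpoint and data, another team could rewrite or redeploy it without touching any other part of the system – which is precisely the modularity argument above.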

The Integration of Microservices & Containers

Containers have become ubiquitous in software development and deployment, and our Federal clients have embraced containers over traditional virtual machines. Containers let development teams build features and services for an application with confidence that those features will work in every environment – development, testing, production, and both virtual and physical servers. Containers are cleanly separated from one another while sharing the host’s resources, so multiple containers can run on the same server in isolation without impacting each other if there’s a technical issue. Containers can also be ephemeral, created or destroyed easily. This lets teams deploy and test new features in isolation, in any environment, without impacting another developer’s workflow or other components of the application.

A container maintains its own runtime environment, tools, databases, and APIs – creating a completely isolated environment for service development. This is a natural fit for creating and deploying microservices, and for incorporating microservice development into a team’s DevSecOps pipeline and workflow. A developer can use Docker or OpenShift to spin up a container in seconds to run, test, debug, and deploy their microservice. Once finished, they can destroy the container instance just as quickly, with no impact on other team members or other features of the application. This shortens the development cycle and the time to market for new features and enhancements.

With tools like Docker Compose, teams can define each microservice as a Docker container within a single file and execute multi-container Docker applications on any environment (e.g. testing your services in a staging or testing environment). Your Docker containers can then be deployed to Docker Swarm or Kubernetes for container orchestration, deployment management, and automatic creation and tear down of containers as needed (i.e. scaling). Leveraging Docker in conjunction with Docker Compose provides complete container and microservice integration, as each service is configured and managed in the container ecosystem.
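As a sketch of what this looks like in practice, a Compose file might define two hypothetical microservices as containers on one host; the service names, build paths, and ports below are invented for illustration.

```yaml
# docker-compose.yml -- each microservice is its own container
version: "3.8"
services:
  alerts:
    build: ./alerts          # Dockerfile for the alerts microservice
    ports:
      - "8081:8080"          # host:container
  formatter:
    build: ./formatter       # independent service, independent image
    ports:
      - "8082:8080"
    depends_on:
      - alerts               # start-order hint, not a health guarantee
```

Running `docker-compose up` builds and starts both containers together; `docker-compose down` tears the whole stack down without affecting the host or other workloads.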

Drinking Our Own Champagne

At eGlobalTech, we’ve developed microservices for multiple clients, including building new systems and migrating monolithic applications to lean, microservice-powered systems. In one recent client success, we migrated an existing system whose dozens of interdependencies and monolithic architecture made maintenance and upgrades cumbersome. Our team applied the Strangler Pattern (a design pattern for incrementally migrating legacy architecture components to microservices) to develop, test, and deploy dozens of new microservices powering system messaging, alert aggregation and notification, and data formatting across multiple data formats. This enabled us to test our microservices against the existing services side by side and transition each service to its new microservice without any interruption to users.

Even though microservices aren’t a silver bullet and require careful design and implementation, they establish a foundational application design that is modular, scalable, and extensible as the system evolves over time.

Stay tuned for Part 2 of this blog series, which will cover common design techniques.

Contact us to learn more!


Copyright 2019 | eGlobalTech | All rights reserved.


DevSecOps and Espier Achieve Authority to Operate Faster

The ATO Journey & Its Challenges

The Authority to Operate (ATO) is a formal declaration that a software environment is approved for deployment to a federal production environment. Achieving an ATO involves a long and careful process of evaluating each tool in the technology stack and ensuring it won’t put the security posture of the environment at risk. Multiple security and vulnerability tests are executed, manual security scans are run, and even load balancers and F5 appliances must be configured properly to pass an ATO evaluation. The evaluation can take anywhere from 6-12 months, and sometimes longer depending on the outcome of the initial assessments. As one can imagine, this is a daunting exercise that can leave teams scrambling and stressed. Federal programs will sometimes make technological and architectural decisions based on whether they will make the ATO easier to achieve – even when those decisions are not optimal or don’t position the technology stack to scale in the future. Achieving an ATO is one challenge; maintaining it is another. After an ATO is granted, recurring evaluations are performed to ensure the agency’s security standards are still being met.

Introducing DevSecOps & Espier

This is where DevSecOps – the next step in the evolution of DevOps – and Espier (an open source plugin for penetration and Open Web Application Security Project [OWASP] testing) come in. In DevOps, developers create continuous integration / continuous delivery (CI/CD) pipelines that build, deploy, and test software with every commit to the code repository. Code is unit tested, regression tested, and validated with each build, then deployed upon a successful test run. DevOps reduces technical debt, errors, and bugs in each development sprint and makes teams more productive because automation is built into each step. Espier integrates security and vulnerability testing into the DevOps process (hence DevSecOps), automatically testing your security posture with each developer commit.
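As a sketch of what such a pipeline definition can look like, a declarative Jenkinsfile can wire build, test, security-scan, and deploy stages to run on every commit. The stage commands below (`mvn`, `run-security-scans.sh`, `deploy-to-staging.sh`) are illustrative placeholders, not eGlobalTech’s actual configuration or Espier’s interface.

```groovy
// Jenkinsfile -- declarative CI/CD pipeline sketch; every shell
// command is a hypothetical placeholder for your project's tooling
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }           // compile & package
        }
        stage('Unit & Regression Tests') {
            steps { sh 'mvn test' }                 // fail fast on bugs
        }
        stage('Security Scan') {
            // a security/penetration-testing plugin would run here,
            // so every commit is checked, not just the ATO snapshot
            steps { sh './run-security-scans.sh' }
        }
        stage('Deploy') {
            when { branch 'main' }                  // gate deployments
            steps { sh './deploy-to-staging.sh' }
        }
    }
}
```

The point of the sketch is the ordering: security runs as an ordinary stage in the same pipeline as the unit tests, so a vulnerability fails the build the same way a broken test does.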

The Value of Espier

Espier is a Jenkins plugin that automatically scans for cross-site scripting attacks and SQL injections and performs other types of penetration testing. It continuously detects vulnerabilities as part of every software build, allowing developers to remediate them incrementally. At most federal agencies, penetration testing is disconnected from the core development process and conducted late in the system delivery lifecycle, even though for an ATO it is a requirement best satisfied early on. Espier runs as a series of tests alongside your test suite in Jenkins, and because it is encapsulated in Jenkins, it supports Docker and deployments to multiple environments. eGT Labs, the innovation engine that created Espier, chose to build a plugin rather than a standalone application to avoid inserting an additional tool, which for federal projects can be a big deal: adding tools can modify an ATO posture, but if Jenkins is already approved, Espier can easily be integrated. Plugins are also extensible and easy to install and maintain, and we chose Jenkins because it’s the industry-standard tool for CI/CD and DevSecOps. Even if you use SonarQube or a static analysis tool like Fortify in your stack, penetration tests are often overlooked and can be difficult to emulate. Espier is a simple solution that is free, open source, and available to use today.
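To illustrate the class of flaw such scans probe for, the sketch below uses Python’s built-in sqlite3 module to contrast a query built by string concatenation (injectable) with a parameterized one; the table, data, and function names are contrived for the demo and are not part of Espier.

```python
# Why penetration tests probe for SQL injection: string-built queries
# execute attacker-supplied SQL, parameterized queries treat input as
# data. Table and values are contrived for the demo.
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: user input is spliced directly into the SQL text
    query = f"SELECT id FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Parameterized: the driver binds the value; the SQL never changes
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

def demo():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "x' OR '1'='1"                  # classic injection payload
    leaked = find_user_unsafe(conn, payload)  # matches every row
    safe = find_user_safe(conn, payload)      # matches nothing
    return leaked, safe
```

Running the demo shows the unsafe query leaking the whole table while the parameterized query returns nothing – exactly the behavioral difference an automated scan is built to detect on every commit.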

Contact eGT Labs to get started with Espier!


Copyright 2018 | eGlobalTech | All rights reserved.

Everything-as-a-Service: Have Federal CIOs Found Their Holy Grail?

As cloud services grow in capability and adoption, federal agencies are pondering whether to entrust their IT to commercial cloud providers and, more significantly, how far to go in relying on cloud services. The concept of adopting an Everything-as-a-Service (XaaS) model is growing in popularity among industry pundits and writers. But what exactly is XaaS? XaaS is a cloud computing term for practicing a cloud-first policy by adopting cloud-based technologies instead of on-premise solutions. The objective of XaaS is to reduce, as much as possible, the human capital and capital expenditures required to maintain on-premise, hosted applications. XaaS holds the promise of maximizing IT efficiency and applying more IT dollars in direct support of an organization’s mission.

XaaS-driven organizations seek to adopt cloud services in several forms. In order of potential value (from low to high), Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) have become the main vehicles for organizations to push as many workloads out of their data centers as possible. In particular, agencies are now focusing on pushing workloads to fully managed environments – PaaS and SaaS – in order to reduce costs. Newer forms of cloud services continuously emerge, such as Data-as-a-Service (DaaS), Mobility-as-a-Service (MaaS), and Security-as-a-Service (SECaaS). These digital services have expanded to cover nearly every workload a typical organization might require. As we will see in this blog, CIOs across all agencies should strongly consider making the move to XaaS to achieve greater responsiveness to their agencies’ IT needs.

Gaining Greater Efficiencies with Everything-as-a-Service

If we look at IT through the lens of efficiency, we see increasing efficiencies as we progress up the cloud service ladder from IaaS to PaaS to SaaS. Each cloud solution improves, in most respects, on on-premise infrastructure hosting. On-premise hosting places the greatest burden on an organization because of expenditures like office space, power, uninterruptible power supplies, generators, networking infrastructure and backbone, industrial-strength air conditioning, server room infrastructure, and the servers themselves. Simply put, it is very expensive and complex to maintain quality on-premise applications. A cloud transformation to XaaS brings tremendous efficiencies, and each of the cloud service types described below provides increasing efficiencies wherever it is applied. Most importantly, commercial cloud providers possess the talent and resources to ensure a high degree of security and scalability.


IaaS provides a relatively faithful replacement for physical infrastructure, hardware, networking, and core security services. It offers virtualized implementations of most physical server resources, with a nearly limitless supply of virtual machines, networking assets, and security features for any size workload. The major advantage is that with IaaS you can almost instantly provision new virtual networks and machines with a high degree of scalability. IaaS increases your organization’s stability tremendously and is suitable for most types of workloads. The one downside is that consumers still own the cost of configuring and managing the servers, from the operating system up through the application tiers. While IaaS is a step forward from on-premise hosting, PaaS is a further step forward in efficiency.


With PaaS, providers offer pre-configured infrastructure and manage both the operating system and the application platform. The consumer manages only their data and applications, which substantially reduces configuration, operations, and maintenance costs. PaaS also provides enhanced scalability, so the consumer is not burdened with complex configuration tasks, patching, and management-related expenses. Succinctly put, PaaS improves upon IaaS by managing the operating systems, networking infrastructure, and scaling infrastructure so your organization can spend more time engineering the business logic of its applications and less time building indirect value through infrastructure management. According to Statista’s Platform as a Service Dossier[1], the PaaS segment grew a very significant 32% from 2016 to 2017. PaaS is good, but to maximize cost efficiency and scalability, SaaS is better.


SaaS holds the potential of being the crème de la crème of XaaS, with the highest return on investment (ROI), because the commercial service provider offers a turnkey software solution. Compared to applications hosted on IaaS or PaaS, SaaS requires minimal software development and operational support. Consumers need not worry about hardware, software, configuration, or patching because the service provider manages it all; they typically only purchase licenses for their employees. Offloading nearly all infrastructure engineering and support responsibilities allows agencies to shift dollars and focus toward pure business automation and innovative projects that directly support the agency’s mission.

With current XaaS options in the marketplace, agencies have an opportunity to reduce human capital cost and capital expenditures while gaining major efficiencies and ROI. Is XaaS, therefore, going to be the holy grail of IT management for CIOs?

Benefits of XaaS

With XaaS it is possible to provision nearly any business service almost instantaneously. For example, office automation can be immediately, and cost-effectively, provisioned through products such as Microsoft Office 365 and Google Apps. Within XaaS, the SaaS market has grown substantially, with ever-increasing breadth and depth of services addressing common business needs. SaaS services support an overwhelming majority of workloads, such as human resources (HR), customer relationship management (CRM), and enterprise resource planning (ERP). Services such as Automatic Data Processing (ADP), Salesforce, SugarCRM, Oracle Cloud, and Microsoft Dynamics have become compelling solutions. These services can be instantly provisioned, require little maintenance, feature fault tolerance and redundancy, and offer nearly infinite scalability.

Implementing XaaS is relatively painless and fast for most organizations; you don’t have to spend years building out an IT ecosystem as with traditional IT models. Best of all, XaaS holds the promise of directing a high degree of IT spending toward an agency’s mission. Computerworld quotes a Computer Economics study that states “cloud-centric organizations are saving 15% in their IT budgets, mostly due to a reduction in data center costs.” Because of those savings, organizations were increasing investments in new IT projects. Many startup companies adopted a XaaS philosophy, enjoying the benefits of never having to build or co-locate a server room, hire expensive infrastructure support personnel, or adapt business processes to existing software.

Funding is also an area where XaaS offers tremendous upside in terms of simplicity. Budgeting for XaaS is generally a question of identifying the number of seats/licenses required for each service and projecting demand throughout the year – it scales with an agency’s demand. The XaaS budgeting model differs significantly from traditional IT procurement, where the government must spend months or years procuring hardware, licensing, storage, equipment, energy, cooling, and more – a daunting and expensive task. With XaaS there is no need to track those costs, as those expenses are borne by the service provider.
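To make the seats-times-price budgeting arithmetic concrete, here is a toy projection in Python; every service name, seat count, and per-seat price below is invented for illustration, not real pricing.

```python
# Toy XaaS budget projection: annual cost is seats x monthly price x 12,
# summed across services. All line items below are invented examples.
def annual_xaas_cost(services):
    """services: list of (name, seats, monthly_price_per_seat) tuples."""
    return sum(seats * price * 12 for _, seats, price in services)

plan = [
    ("office-suite", 500, 12.50),   # hypothetical SaaS line items
    ("crm",          120, 65.00),
    ("hr-platform",  500,  4.00),
]

total = annual_xaas_cost(plan)
# 500*12.50*12 + 120*65.00*12 + 500*4.00*12 = 192600.0
```

Compare that three-line forecast with a traditional procurement spreadsheet covering hardware, storage, energy, and cooling: the per-seat model is the entire budget.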

For the public and defense sectors, there is an additional compelling aspect of XaaS. Many government IT organizations struggle to meet Continuity-of-Operations (COOP) requirements due to the technical complexities, the acquisition processes, and the number of agreements that must be executed. Most modern commercial XaaS services have COOP built in, operate from multiple “regions,” and can automatically scale across those regions. This helps government IT organizations achieve their COOP requirements with comparatively little effort.

The Potential Difficulties with XaaS

With any IT model there are pros and cons, and it’s important to evaluate which outweigh the others. Adopting a XaaS strategy in the public sector is not without risks. XaaS is heavily network dependent because, ideally, all services are accessed via the Internet or private VPN. As such, there will be increased costs for maintaining highly performant and robust networks – although the savings realized by transforming to XaaS are typically much greater. Implementing a XaaS strategy also requires a substantial amount of transformation and will initially impact existing business processes, for better or worse. Lastly, there are lingering perceptions regarding the security and reliability of cloud services, especially among system security managers.


XaaS provides a tremendous opportunity to achieve a leaner, stronger, and more agile government. Agencies gain the ability to rapidly field new capabilities without the delays inherent in traditional IT hosting approaches or procurement models, and they no longer need dedicated on-premise physical infrastructure for IT hosting. With the significant ROI achieved, agencies can transition IT support staff from maintaining legacy systems to building valuable, innovative solutions that directly impact the mission. The complexity of funding is reduced, and IT budget forecasting is simplified. With proper planning and adherence to FedRAMP accreditation, transforming to XaaS will lead to more secure, scalable solutions with greater continuity. Agencies that opt out of XaaS will be at a disadvantage, continuing to incur unnecessary expenses to maintain their IT infrastructure. If implemented and managed correctly, Federal CIOs may have, after all these years of searching, found the holy grail of IT management!


Contact us for more details about how XaaS can transform your organization.



Succeeding in a Marriage of DevOps and Security: 5 Tips for a Happy and Prosperous Union


Why DevOps and Cybersecurity need to Marry

It is well known that modernization through DevOps and cybersecurity are the two biggest IT challenges facing the federal government today—but how can federal CIOs rapidly address both simultaneously? Cybersecurity has historically operated as an independent entity, with an emphasis on achieving compliance rather than engineering next-generation security. Development and operations teams have typically viewed cybersecurity as a hindrance because it impacts their ability to adopt new technologies and slows their process. In many cases, cybersecurity is the last stage of the development lifecycle, where long lists of security problems and compliance issues are analyzed, documented, and left waiting for remediation. By definition, DevOps does not exclude security or any other facet of the software development and delivery process. For federal institutions, however, the role of security must be made explicit and can no longer be an afterthought in the DevOps process.

By explicitly fusing security into the DevOps model and calling it DevSecOps, we declare a union in which teams collaborate to deliver secure, high-performing systems. DevSecOps injects security into the foundation of software development from day one, providing an integrated approach to unifying teams, technologies, and processes for faster, more robust, and more secure products.

DevSecOps teams with multiple cross-functional capabilities can work together to address the two crucial objectives of any organization—increase the rate of releases and maximize security protection. This model generates a wealth of learning and experiential practices that can cost-effectively accelerate the pace of federal system modernization initiatives.

Keys to a Successful Marriage of DevOps & Cybersecurity

Uniting development, operations, and security in a DevSecOps model to deliver modern and secure technology solutions is not an easy endeavor. An effective marriage between the two requires the following five building blocks:

  1. Build trust early – empathy, collaboration, and communication

DevSecOps teams are not yet common, so you may be assembling a team that has no experience working in this model. DevOps team members may have preconceived notions of the security team members and vice versa. It is imperative to build trust among team members early through collaboration and communication. The team needs to buy in and develop empathy for their teammates’ concerns. With this, they can collectively work toward an innovative solution that uses the most appropriate technology and is designed for security from inception. This includes instrumenting security controls into the architecture, analyzing it for vulnerabilities, and addressing them as part of the development process. Security issues and tasks should be tracked in the same common product backlog and prioritized alongside feature stories before each sprint.

  2. Come together and establish a common process framework

A common process framework should be established by unifying and aligning the security governance, risk, and compliance process established by the NIST Risk Management Framework with the organization’s system development lifecycle. A security engineer should be dedicated to this effort and participate from inception. From architecture design, to active development, to testing and operations—the security engineer actively collaborates with peers to integrate and implement secure engineering and design practices. This fosters collaboration and an investment in each other’s success—collectively they succeed or fail, and never point fingers.

  3. Be kind to your partner – commit to collaborate

A DevSecOps team has many diverse players with different goals and responsibilities. It is important to first rally the team together to work toward a common goal using all their expertise collaboratively. Next, it is important to learn, understand, and appreciate each other’s concerns, therefore eliminating the “no, it is not possible” approach. It is critical that team members understand each other’s challenges and jointly explore and provide alternative approaches to reach the best possible solution.

  4. Simplify life – automate repetitive tasks

DevOps teams automate through a CI/CD pipeline. Cybersecurity engineers should integrate into this pipeline, automating tasks that can range from security testing, to monitoring, and possibly documentation. By adhering to a common CI/CD pipeline, everyone is forced to get on the same page viewing a unified software release quality report that lays bare facts on security failures and software quality. This eliminates any chances for misinterpretation or misunderstanding.

eGT Labs®, the research and development arm of eGlobalTech (eGT), developed Espier®. This security tool automates and integrates security penetration testing as part of the CI/CD pipeline, enabling early detection and faster remediation of vulnerabilities while ensuring that only secure code is deployed.

  5. Keep the spark alive – continuously learn and evaluate emerging tools and technologies for adoption

DevSecOps teams need to keep the spark alive by continuously innovating. Developers need to stay ahead of the curve and produce cutting-edge solutions, while security experts need to evaluate and secure those new technologies. While this is not a simple task for either side of the house, it is a challenge that will keep the marriage exciting. Much of this innovation should take the form of automation for both development and security processes. On the security compliance front, composable security documentation practices are becoming viable and practical: they replace a document-heavy compliance process with machine-generated data that factually depicts security posture, as opposed to human-written opinion alone. By embracing these emerging practices, DevSecOps teams can achieve process efficiencies and deliver secure software faster.

Case Study

A public sector eGT client had a complex geospatial system prototype composed of Microsoft and open source applications with a growing number of ArcGIS services. This prototype was used in a production capacity and encountered frequent outages, as well as performance and security issues. eGT was engaged to transform and migrate this ecosystem to the cloud.

eGT applied our DevOps Factory® framework to re-engineer the target architecture, implement the security-first design, and automate the end-to-end cloud migration process onto our managed AWS infrastructure. By collaborating with the security organization and instrumenting security controls into the architecture from inception, we successfully passed all required security audits with no major POAMs and achieved full Authority to Operate (ATO) within four months.

Leveraging Cloudamatic®, an open source cloud-orchestration framework developed by eGT Labs, we instrumented security and operational monitoring and log aggregation as part of the automated deployment process. This enabled our team to proactively detect security threats and respond quickly by patching cloud environments composed of hundreds of instances running both open source and commercial software in a matter of minutes.

DevSecOps: Happily Ever After

The needs to “accelerate time to market” and “maximize application and information protection” are not diametrically opposed to each other. They are two conjoined business requirements for any agency that desires to successfully thrive in today’s digital world. Government CIOs can achieve this by uniting teams in a DevSecOps model using best practices to ensure a strong and successful team culture.

Please contact us to learn more about how eGlobalTech practices DevSecOps in system modernization initiatives at DHS, HHS, and other civilian agencies.



Serverless Computing: The Next Cloud Evolution


Cloud Genetics Are Mutating

Throughout evolution, there are periods when certain traits of an organism make little sense and become vestiges of the past, while certain genetic mutations make the organism stronger and more competitive. Web applications offer a modern example. Most cloud-hosted web applications reside on dedicated virtual servers, racking up charges and consuming electricity and hardware resources whether they are in use or not. These applications are typically installed on virtual machines that must be provisioned, configured, maintained, and patched by developers and operations staff, and they do not seamlessly scale with usage without human intervention. In a world of software-defined everything, developing applications and deploying them to fixed virtual servers no longer makes sense for many low- to moderate-intensity application workloads. However, a new genetic evolution has emerged, making web applications more scalable and efficient than ever before: web application genetics have started to mutate, with event-driven development and serverless computing as new capabilities. Serverless computing takes Platform as a Service (PaaS) to a new level by completely abstracting server configuration from the developer, allowing them to focus on the solution rather than the underlying infrastructure. Event-driven development allows code to stay completely dormant until an application service is requested by a user.

Serverless computing manages all creation, scaling, updating, and security patching of underlying server resources. It also abstracts nearly all server details from the developer to eliminate the need for creation of virtual machines, software installation, security patching, advanced configuration, and maintenance. With serverless computing, developers simply create a serverless service and, instead of receiving a new virtual server, they receive a service endpoint and credentials. The service manages provisioning, autoscaling, maintenance, and updates to keep services running continuously. Developers use these endpoints with the same tools and development environments they use today to build clean, elegant serverless solutions.

To capitalize on this efficiency, web applications are increasingly evolving toward serverless computing using components such as Amazon Web Services (AWS) Lambda, Azure Functions, and Google Cloud Functions. Each of these serverless computing environments combines event-driven architecture with serverless compute, creating incredibly efficient service environments that rapidly provision on demand to meet user requirements and then scale back down to reduce costs. Serverless database services (such as AWS Aurora, Azure SQL Database, and Google Cloud SQL) are evolving along the same lines.

These mutations in the cloud’s genetics have produced serverless computing, an architecture model in which code execution is completely managed by the cloud provider. Developers no longer have to provision, manage, and maintain servers when deploying code, nor define how much storage and database capacity is required before deployment – reducing the time to production.

How Will Cloud Computing Genes Diversify for Better Efficiency?

AWS Lambda, introduced in 2014, provided an event-driven compute service that lets code run without provisioning or managing servers, essentially executing code on demand. It has since been joined by Azure Functions and Google Cloud Functions. These services are known as Function as a Service (FaaS): you pay as you go for execution only, not for underlying resource usage. With AWS Lambda, your service activates when triggered by a client request to execute code; when execution is complete, the resources are scaled back. Serverless functions are most effective with code that executes for relatively short periods – typically less than 5 minutes. They can be configured to scale automatically, making quick work of large workloads. With serverless design, dormant services are scaled down and do not run up the bill.
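To make the FaaS model concrete, here is a minimal sketch of a Python handler in the AWS Lambda style. The event shape and the API Gateway proxy-style response format are illustrative assumptions, not a definitive implementation:

```python
# Minimal AWS Lambda-style handler (Python). The event shape and the
# response format (API Gateway proxy-style) are illustrative assumptions.
import json


def handler(event, context):
    """Runs only when an event (e.g., an HTTP request) triggers it.

    The developer provisions no server; the platform spins up capacity
    on demand and scales it back to zero when the function is idle.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform invokes the handler once per triggering event, and billing accrues only while the code is actually executing.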

Is the Serverless Computing Genetic Evolution Compatible With Any Tech Ecosystem?

Continuing with our genetic evolution metaphor, a genetic evolution will only survive if it benefits an organism and, in some way, strengthens its chances of survival. Similarly, serverless computing must strengthen applications without making them incompatible with their ecosystem. While all signs are promising, developers and architects are still trying to understand all the opportunities and challenges associated with their initial findings. Serverless computing offers increased productivity while reducing costs; however, it also presents practical, real-world challenges. A few of those issues include:

  • The technology behind serverless computing is still nascent, so there is limited documentation on ideal usage and best practices. In some cases, it is trial and error for teams utilizing the technology.
  • Serverless technology may require rework of some code. Code that is stateful and that does not follow common web application best practices would have to be rewritten.
  • The programs supporting serverless computing are not yet approved by FedRAMP; however, approval is expected soon.
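The stateful-code challenge above can be made concrete with a minimal sketch. A module-level counter is lost whenever the platform recycles a function instance, so state must be pushed to an external store; here a plain dict stands in for a real service such as DynamoDB, and all names are illustrative:

```python
# Sketch: refactoring stateful code for serverless. A module-level counter
# is an anti-pattern because it vanishes between cold starts and differs
# across concurrent instances. A plain dict stands in for an external
# store such as DynamoDB; the function names are illustrative.

visit_count = 0  # stateful anti-pattern: lost when the instance is recycled


def count_visit_stateful():
    global visit_count
    visit_count += 1
    return visit_count


def count_visit_stateless(store, key="visits"):
    # State is read from and written back to an external store, so any
    # instance of the function can correctly handle any request.
    current = store.get(key, 0) + 1
    store[key] = current
    return current
```

The stateless version costs an extra round trip to the store per invocation, which is the usual price of making code safe for automatic scaling.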

Is Serverless Computing Working in the Federal Government?

eGlobalTech is implementing serverless architecture in its Federal engagements. Recently, we collaborated with the Federal Emergency Management Agency (FEMA) to migrate a mission-critical system to an AWS cloud platform. We are now modernizing the portions of the system identified as good candidates to go event-driven and serverless, which will provide massive scalability improvements over the existing solution.

Contact us today if you need help strategizing and implementing your serverless computing project.


Copyright 2018 | eGlobalTech | All rights reserved.


Big Data is Not Technology-Focused

Big Data Infographic

The business case for leveraging Big Data is discussed extensively in conferences and publications worldwide – and it could not be clearer – it provides instant insight for better decision making. As a result, Big Data is on the mind of every business and technology leader from almost every industry sector imaginable, especially the public sector. The Social Security Administration (SSA) uses Big Data to better identify suspected fraudulent claims, and the Securities and Exchange Commission (SEC) applies Big Data to identify nefarious trading activity. Many other Federal agencies are looking at ways to use Big Data to help execute their mission areas. In addition, a growing market of tools and techniques is now available to help Federal agencies effectively analyze large volumes of disparate, complex, and variable data in order to draw actionable conclusions. Nevertheless, many of these same leaders are challenged in terms of successfully realizing the benefits Big Data has to offer, mainly because they:

  • Believe it is revolutionary and technology-focused, rather than an iterative and cyclical process.
  • Cannot determine the value of the data available to them. This challenge is multiplied when you consider that this data is consistently and rapidly expanding in terms of volume, variety, and velocity (in part, due to factors such as IoT, social media, video and audio files).
  • Are concerned about security and privacy issues.

Overcoming Big Data Challenges

Although these challenges can seem overwhelming, they can be overcome through a methodical process focused on improving your agency’s mission performance. The key is to adopt a use case-driven approach to determine how and when to begin your Big Data migration. Assuming your organization has already developed a Big Data strategy and governance framework, this iterative approach begins with your business (non-IT) stakeholders who support your organization’s primary or core functions. The objective is to use their institutional knowledge to define and prioritize a set of business needs/gaps which would improve their ability to perform their jobs more effectively. Once defined, the march towards Big Data begins in an iterative and phased manner.

Implementing Big Data in an Iterative and Phased Approach

Following the prioritized order, each business need should be decomposed into a use case (including items such as process flows, actors, and impacted IT systems). This decomposition will also help to facilitate the identification of all available data assets which touch upon the use case (private and public). In doing so, it is critical to brainstorm as broadly as possible. If a municipal government was trying to analyze how weather conditions impact downtown traffic patterns, they might use data from local weather reports, traffic camera feeds, social media reports, road condition reports, 911 calls reporting vehicular accidents, maintenance schedules, traffic signal times, etc. – not just the obvious traffic and weather reports.  Each data asset should be assessed for quality (to determine if cleanup is required) and mapped to its primary data source.
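Assessing each data asset for quality can start with something as simple as measuring completeness per field. Below is a minimal, stdlib-only sketch; the records and field names are hypothetical, echoing the weather/traffic example above:

```python
# Minimal data-quality check: completeness (share of non-empty values)
# per field across a data asset. Records and field names are illustrative.

def field_completeness(records):
    """Return {field: fraction of records with a non-empty value}."""
    if not records:
        return {}
    fields = set().union(*(r.keys() for r in records))
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / len(records)
        for f in fields
    }


# Hypothetical 911-call records from the traffic use case.
incidents = [
    {"time": "08:05", "location": "Main & 3rd", "weather": "rain"},
    {"time": "08:40", "location": "", "weather": "rain"},
    {"time": "09:15", "location": "Elm Ave", "weather": None},
]
quality = field_completeness(incidents)
```

Fields with low completeness scores are flagged for cleanup or traced back to their primary source before the asset is admitted to the analysis.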

Securing Big Data and Establishing Privacy Standards

Now that we have a handle on the data and the data sources that will be analyzed to support our use case, defining the proper security and privacy requirements for the Big Data analytics solution should be more straightforward (rather than defining a broad set of security controls for a Big Data environment in general). These requirements should be based upon the data assets with the highest sensitivity level, as well as the sensitivity levels of any analysis results that might be derived from various combinations of the data assets. The requirements will, in turn, enable the definition of the appropriate security controls that need to be integrated into any solution to reduce risk and prevent possible breaches.
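The “highest sensitivity level” rule above is essentially a high-water mark across the assets in a use case. A tiny illustrative sketch, where the level names are assumptions rather than a formal classification scheme:

```python
# High-water-mark sketch: the solution's required protection level is set
# by the most sensitive data asset (or derived result) it handles.
# The level names and their ordering are illustrative assumptions.

LEVELS = ["public", "internal", "sensitive", "high"]  # lowest to highest


def required_level(asset_levels):
    """Return the highest sensitivity level among the given assets."""
    return max(asset_levels, key=LEVELS.index)
```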

Selecting the Right Big Data Solution

At this point, you are ready to start considering technology solutions to perform your Big Data analysis (such as analytics, visualization, and warehousing tools). The technology you select, including infrastructure, core solutions, and any accelerators, should be based upon the information collected up to this point, including security requirements, data structures (structured vs. semi-structured vs. unstructured), and data volume. Remember, no single technology defines a Big Data solution; rather, your selection should be driven by your specific requirements. Data scientists can then use these technologies to develop algorithms to process the data and interpret the results. Once completed, you should move on to the next use case.
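As a rough, hypothetical illustration (not a formal selection methodology), the mapping from the data characteristics gathered earlier to broad technology categories might be sketched as:

```python
# Rule-of-thumb sketch only: category names and the 10 TB threshold are
# illustrative assumptions, not a prescribed selection methodology.

def suggest_platform(structure, volume_tb):
    """Map data structure and volume to a broad technology category."""
    if structure == "structured":
        # Conventional relational tooling often suffices at modest volume.
        return "SQL data warehouse" if volume_tb < 10 else "MPP data warehouse"
    if structure == "semi-structured":
        return "document store / NoSQL"
    # Unstructured data (video, audio, free text) at scale.
    return "data lake with distributed processing (e.g., Hadoop/Spark)"
```

In practice the security requirements and existing IT investments discussed above would constrain this choice further.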

In summary, it is essential to remember that communication, change management, and governance are key to successfully deriving any meaningful and usable results from Big Data. Other key success factors include:

  • Do not start with a technology focus. Instead, concentrate on business/mission requirements that cannot be addressed using traditional data analysis techniques.
  • Augment existing IT investments to address initial use cases first, then scale to support future deployments.
  • After the initial deployment, expand to adjacent use cases, building out a more robust and unified set of core technical capabilities.

These factors will ensure your agency adopts Big Data securely and effectively, achieving results at each iterative step and maximizing the use of your valuable resources.

Contact us today if you need help strategizing and implementing your Big Data project.



Incubation as a Service: An eGT Labs Story

eGT labs Incubation as a Service image with logo and computer circuit lines

The evolution of digital technologies and practices is advancing at a faster rate than ever before, revealing new and innovative ways to improve productivity and disrupt the norm. Federal agencies, with their shrinking budgets, are finding it difficult to keep up. Although several large-scale IT modernization and digital transformation initiatives are underway to help close the gap, they are predominantly multi-year, multi-million-dollar investments that carry considerable risk of failure. Agile and DevSecOps practices mitigate some of those risks, but federal agencies are often handcuffed by complex contracting processes that typically do not permit experimental innovation or tolerate failures. Lean Startup, a proven model to build, measure, and learn through experimentation, nearly always comes with a hefty price tag. At eGlobalTech (eGT), we witnessed these challenges first-hand and decided to invest in our own incubation program. The strong entrepreneurial spirit instilled by our late founder Sonya Jain drives us to incubate innovative ideas and build reusable tools. This eGT culture naturally led to the perfect chemical concoction that created “eGT Labs”—a corporate-sponsored R&D arm focused on incubating high-value, reusable solutions, industry partnerships, and thought leadership designed to further our clients’ missions and capabilities—and our Incubation-as-a-Service model for innovation.

A Grass-Roots Movement

Keeping in line with “necessity is the mother of invention,” a unique and complex customer problem was the seed that gave birth to this model. Our project team at a federal agency built a sophisticated and complex total cost of ownership (TCO) model using Excel spreadsheets. We are talking about 50+ worksheets with 3.5 million heavily nested formulas and about 400-500 columns. It produced magnificent cost predictions that came close to actuals, and the tool was regarded as the Rosetta Stone by its users. The problem was that Excel does not scale, and it became very hard to keep the model up to date and fix defects. Besides, loading a 20GB Excel file is no picnic. It was obvious that we needed an automated, web-based platform. As a result, we invested our own resources and created the eTCO application. This gave birth to eGT Labs.

What Services Does eGT Labs Offer?

Our primary goal is to work with existing federal clients with whom we have established contracts. At no additional cost, we provide shrink-wrapped Incubation-as-a-Service for short-term engagements to prototype solutions that can dramatically improve outcomes. This Incubation-as-a-Service is delivered by multi-skilled digital analysts and engineers who work in a DevSecOps model, leveraging leading open-source technologies, tools, and cloud platforms. An engagement could come about through a recognized client need or an opportunity uniquely discovered by our project team. Regardless, we shed our contractor status and act as true partners invested in our client’s success.

Over the past few years, this model has resulted in the following high-value tools and products that are actively utilized in our projects at the Department of Health and Human Services (HHS) and the Department of Homeland Security (DHS):

DevOps Factory™

End-to-end DevSecOps framework designed specifically to align with federal Software Development Lifecycle (SDLC) standards and capable of rapidly transforming nascent business objectives into production-ready, secure, functional, and usable software.


100% open-source tool that automates deployments of entire application stacks to the cloud in a single click. For more information, visit


Automates and integrates web application security vulnerability testing as part of the continuous integration and continuous delivery pipeline, enabling early detection and faster remediation of vulnerabilities while ensuring that only secure code is deployed.

Electronic Total Cost Ownership (eTCO)

Stand-alone, web-based cost analytics model and visualization tool with 400+ metrics enabling users to model “what-if” scenarios and gain deeper insight on true cost of ownership of their data center infrastructure.

Other interrelated services offered by eGT Labs include:

  • Tools development to support ongoing advancement and maturation of technology tools
  • Managed cloud hosting services
  • Thought leadership on the applicability of emerging concepts and technological practices conveyed through white papers, case studies, and hackathons
  • Industry partnerships with leading software and cloud providers to enhance our knowledge and expertise

Journey into 2018

One of eGT Labs’ latest technology incubation experiments is called “Site Monitor,” a cloud-hosted, web-based tool that continuously monitors the security posture of enterprise websites at scale and maintains up-to-date compliance with newly released Office of Management and Budget (OMB) mandates. Site Monitor can perform on-demand scans, provide quick access to compliance-related data, display data in an easy-to-digest manner, and suggest the appropriate network security upgrades to achieve compliance. While it is actively being used by a prominent cabinet-level agency, we plan to release Site Monitor for general availability this summer for adoption at other agencies. Our roadmap for 2018 is filled with several initiatives concurrently being executed. They include:

  • Adding features and capabilities to our current toolset for integration into new projects
  • Maturing our “Social Profiler,” a social media analytics prototype
  • Releasing white papers on applicability of emerging technologies and practices such as blockchain, machine learning, and artificial intelligence
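The kind of per-site check a tool like Site Monitor runs can be sketched as follows. The rule set reflects OMB’s HTTPS guidance (M-15-13 requires HTTPS and HSTS for public federal web services), but the function and its output format are hypothetical, not Site Monitor’s actual design:

```python
# Hypothetical sketch of a per-site compliance check in the spirit of
# Site Monitor. The rules reflect OMB M-15-13 (HTTPS-Only Standard:
# serve over HTTPS and send an HSTS header); the function name and
# result shape are illustrative assumptions, not Site Monitor's API.

def check_site(url, headers):
    """Evaluate one site's URL scheme and response headers; return findings."""
    findings = []
    if not url.lower().startswith("https://"):
        findings.append("site not served over HTTPS")
    # Header names are case-insensitive, so normalize before checking.
    if "strict-transport-security" not in {k.lower() for k in headers}:
        findings.append("missing HSTS header")
    return {"url": url, "compliant": not findings, "findings": findings}
```

A scanner would run a check like this across every site in an agency’s inventory and roll the findings up into the kind of dashboard described above.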

How Can Customers Leverage our Incubation-as-a-Service?

Customers can reach out to their eGT project teams if they are interested in learning more about one of our eGT Labs products or starting a conversation on how eGT Labs can help kickstart their digital innovation agenda. Contact us to learn more.

