The eGT Blog is where we share the ideas and knowledge we have gained working in technology and cybersecurity in the public sector. We passionately believe in our federal clients’ missions to make America stronger, smarter, and safer, so we created this space to exchange ideas on how to advance our government’s innovation and modernization.

Serverless Computing: The Next Cloud Evolution

  1. Cloud Genetics Are Mutating

Throughout evolution, certain traits of an organism lose their purpose and become vestiges of the past, while certain genetic mutations make the organism stronger and more competitive. Web applications offer a modern example. Most cloud-hosted web applications reside on dedicated virtual servers that rack up charges and consume electricity and hardware resources whether they are in use or not. These applications are typically installed on virtual machines that must be provisioned, configured, maintained, and patched by developers and operations staff, and they do not scale seamlessly with usage without human intervention. In a world of software-defined everything, developing applications and deploying them to fixed virtual servers no longer makes sense for many low- to moderate-intensity workloads. However, a new genetic mutation has emerged that makes web applications more scalable and efficient than ever before.

Web application genetics have started to mutate, with event-driven development and serverless computing emerging as new capabilities. Serverless computing takes Platform as a Service (PaaS) to a new level by completely abstracting server configuration away from developers, allowing them to focus on the solution rather than the underlying infrastructure. Event-driven development lets code stay completely dormant until a user requests an application service.

Serverless computing manages the creation, scaling, updating, and security patching of the underlying server resources. It abstracts nearly all server details from the developer, eliminating the need to create virtual machines, install software, apply patches, or perform advanced configuration and maintenance. With serverless computing, developers simply create a serverless service and, instead of receiving a new virtual server, they receive a service endpoint and credentials. The platform handles provisioning, autoscaling, maintenance, and updates to keep services running continuously. Developers use these endpoints with the same tools and development environments they use today to build clean, elegant serverless solutions.
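
To make the model concrete, here is a minimal sketch of the kind of function a developer might hand to a serverless platform. It follows the AWS Lambda handler convention for Python; the event field it reads is an illustrative assumption, not part of any platform contract.

    import json


    def handler(event, context):
        """Entry point the serverless platform invokes for each request.

        The platform provisions compute, runs this function, and releases the
        capacity afterward; no server is created or patched by the developer.
        """
        # "name" is an illustrative field assumed to arrive in the request payload.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }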

To capitalize on this efficiency, web applications are increasingly evolving toward serverless computing using components such as Amazon Web Services (AWS) Lambda, Azure Functions, and Google Cloud Functions. Each of these environments combines event-driven architecture with serverless computing, creating highly efficient services that rapidly provision on demand to meet user requirements and then scale back down to reduce costs. Serverless database services, such as AWS Aurora, Azure SQL Database, and Google Cloud SQL, are evolving along the same lines.

The cloud’s genetics have thus mutated into serverless computing, an architecture model in which code execution is completely managed by the cloud provider. Developers no longer have to provision, manage, and maintain servers when deploying code, nor define how much storage and database capacity is required before deployment, which reduces the time to production.

  2. How Will Cloud Computing Genes Diversify for Better Efficiency?

AWS Lambda, introduced in 2014, provided an event-driven compute service that lets code run without provisioning or managing servers, essentially executing code on demand. It has since been joined by Azure Functions and Google Cloud Functions. These services are known as Function as a Service (FaaS): you pay as you go for execution only, not for underlying resource usage. With AWS Lambda, your service activates when a client request triggers the code, and when execution completes, the resources are scaled back. Serverless functions are most effective for code that executes for relatively short periods, typically less than 5 minutes, and they can be configured to scale automatically, making quick work of large workloads. With serverless design, dormant services are scaled down and don’t run up the bill.
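
As an illustration of the pay-per-execution model, the sketch below invokes a deployed function on demand using the AWS SDK for Python (boto3). The function name is a placeholder for something already deployed in your account; charges accrue only for the invocation itself, not for idle servers.

    import json

    import boto3

    # Placeholder name for a function already deployed in your account.
    FUNCTION_NAME = "hello-world-function"

    lambda_client = boto3.client("lambda")

    # The function is dormant until this request arrives; the platform allocates
    # capacity, runs the code, returns the result, and scales back down.
    response = lambda_client.invoke(
        FunctionName=FUNCTION_NAME,
        InvocationType="RequestResponse",
        Payload=json.dumps({"name": "eGT"}),
    )

    print(json.loads(response["Payload"].read()))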

  3. Is the Serverless Computing Genetic Evolution Compatible With Any Tech Ecosystem?

Continuing with our genetic evolution metaphor, a genetic mutation will only survive if it benefits an organism and, in some way, strengthens its chances of survival. Similarly, serverless computing must strengthen applications without making them incompatible with their ecosystem. While all signs are promising, developers and architects are still working through the opportunities and challenges that their initial projects have surfaced. Serverless computing increases productivity while reducing costs; however, it comes with practical, real-world challenges. A few of those issues include:

  • The technology behind serverless computing is still nascent, so there is limited documentation on ideal usage and best practices. In some cases, it is trial and error for the teams utilizing it.
  • Serverless technology may require reworking some code. Stateful code, or code that does not follow common web application best practices, would have to be rewritten to run statelessly (a minimal sketch of this kind of rework follows this list).
  • The platforms supporting serverless computing are not yet FedRAMP approved, although approvals are expected soon.
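
To illustrate the second point above, the sketch below shows one common way stateful logic is reworked for a serverless function: state is moved out of process memory and into an external store. It is an illustrative example only; the DynamoDB table, attribute names, and counting logic are assumptions, not a prescribed pattern.

    import boto3

    # Illustrative table name. In a serverless design, state lives in an
    # external store rather than in the function's memory, which is not
    # guaranteed to survive between invocations.
    table = boto3.resource("dynamodb").Table("visit-counts")


    def handler(event, context):
        """Count visits per user without holding any state in the process."""
        user_id = event.get("user_id", "anonymous")
        result = table.update_item(
            Key={"user_id": user_id},
            UpdateExpression="ADD visit_count :one",
            ExpressionAttributeValues={":one": 1},
            ReturnValues="UPDATED_NEW",
        )
        return {"user_id": user_id, "visits": int(result["Attributes"]["visit_count"])}
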
  4. Is Serverless Computing Working in the Federal Government?

eGlobalTech is implementing serverless architecture across its federal engagements. Recently, we collaborated with the Federal Emergency Management Agency (FEMA) to migrate a mission-critical system to an AWS cloud platform. We are now modernizing the portions of the system identified as good candidates for an event-driven, serverless design, which will provide massive scalability improvements over the existing solution.

Contact us today if you need help strategizing and implementing your serverless computing project.




Big Data is Not Technology-Focused

The business case for leveraging Big Data is discussed extensively in conferences and publications worldwide, and it could not be clearer: it provides instant insight for better decision making. As a result, Big Data is on the mind of business and technology leaders from almost every industry sector imaginable, especially the public sector. The Social Security Administration (SSA) uses Big Data to better identify suspected fraudulent claims, and the Securities and Exchange Commission (SEC) applies Big Data to identify nefarious trading activity. Many other Federal agencies are looking at ways to use Big Data to help execute their mission areas. In addition, a growing market of tools and techniques is now available to help Federal agencies effectively analyze large volumes of disparate, complex, and variable data in order to draw actionable conclusions. Nevertheless, many of these same leaders struggle to realize the benefits Big Data has to offer, mainly because they:

  • Believe it is revolutionary and technology-focused, rather than an iterative and cyclical process.
  • Cannot determine the value of the data available to them. This challenge is multiplied when you consider that this data is consistently and rapidly expanding in terms of volume, variety, and velocity (in part, due to factors such as IoT, social media, video and audio files).
  • Are concerned about security and privacy issues.

Overcoming Big Data Challenges

Although these challenges can seem overwhelming, they can be overcome through a methodical process focused on improving your agency’s mission performance. The key is to adopt a use case-driven approach to determine how and when to begin your Big Data migration. Assuming your organization has already developed a Big Data strategy and governance framework, this iterative approach begins with the business (non-IT) stakeholders who support your organization’s primary or core functions. The objective is to use their institutional knowledge to define and prioritize a set of business needs and gaps whose closure would allow them to perform their jobs more effectively. Once these are defined, the march toward Big Data begins in an iterative and phased manner.

Implementing Big Data in an Iterative and Phased Approach

Following the prioritized order, each business need should be decomposed into a use case (including items such as process flows, actors, and impacted IT systems). This decomposition also helps identify all available data assets, private and public, that touch upon the use case. In doing so, it is critical to brainstorm as broadly as possible. If a municipal government were trying to analyze how weather conditions impact downtown traffic patterns, it might use data from local weather reports, traffic camera feeds, social media reports, road condition reports, 911 calls reporting vehicular accidents, maintenance schedules, traffic signal times, and so on, not just the obvious traffic and weather reports. Each data asset should be assessed for quality (to determine if cleanup is required) and mapped to its primary data source.
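
To make this step concrete, the sketch below captures a small inventory of data assets for the traffic example above, each mapped to a primary source with a quality flag. The asset names, sources, and fields are illustrative assumptions, not a prescribed schema.

    from dataclasses import dataclass


    @dataclass
    class DataAsset:
        """One data asset mapped to its primary source, with a quality assessment."""
        name: str
        primary_source: str
        structure: str        # "structured", "semi-structured", or "unstructured"
        needs_cleanup: bool


    # Illustrative inventory for the downtown-traffic use case described above.
    traffic_assets = [
        DataAsset("traffic camera feeds", "city camera network", "unstructured", True),
        DataAsset("local weather reports", "weather service feed", "semi-structured", False),
        DataAsset("911 accident calls", "dispatch system", "structured", True),
        DataAsset("traffic signal timings", "signal control system", "structured", False),
    ]

    for asset in traffic_assets:
        status = "cleanup required" if asset.needs_cleanup else "ready"
        print(f"{asset.name} <- {asset.primary_source} ({asset.structure}, {status})")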

Securing Big Data and Establishing Privacy Standards

Now that we have a handle on the data and the data sources that will be analyzed to support our use case, defining the proper security and privacy requirements for the Big Data analytics solution should be more straightforward (rather than defining a broad set of security controls for a Big Data environment in general). These requirements should be based upon the data assets with the highest sensitivity level, as well as the sensitivity levels of any analysis results that might be derived from various combinations of the data assets. The requirements will, in turn, enable the definition of the appropriate security controls that need to be integrated into any solution to reduce risk and prevent possible breaches.
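
As a simple illustration of letting the most sensitive content drive the requirements, the sketch below rolls per-asset sensitivity ratings up to a single level for the analytics environment. The level names and ratings are assumptions for illustration, not an official categorization scheme or control baseline.

    # Ordered from least to most sensitive; illustrative labels only.
    SENSITIVITY_LEVELS = ["public", "internal", "sensitive", "high"]

    # Assumed ratings for the use case's data assets and one derived analysis
    # product that combines them.
    asset_sensitivity = {
        "local weather reports": "public",
        "traffic camera feeds": "internal",
        "911 accident calls": "sensitive",
        "combined incident analysis": "sensitive",
    }


    def required_level(ratings):
        """The environment must protect to the level of its most sensitive content."""
        return max(ratings, key=SENSITIVITY_LEVELS.index)


    print(required_level(asset_sensitivity.values()))  # -> "sensitive"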

Selecting the Right Big Data Solution

At this point, you are ready to start considering technology solutions to perform your Big Data analysis, such as analytics, visualization, and data warehousing tools. The technology you select, including infrastructure, core solutions, and any accelerators, should be based upon the information collected up to this point, including security requirements, data structures (structured vs. semi-structured vs. unstructured), and data volume. Remember, no single technology constitutes a Big Data solution; your selection should be driven by your specific requirements. Data scientists can then use these technologies to develop algorithms to process the data and interpret the results. Once completed, you should move on to the next use case.
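
The sketch below is a deliberately simplified illustration of how the characteristics gathered so far might drive a first cut at solution categories. The thresholds and category names are assumptions for illustration, not recommendations of specific products.

    def candidate_categories(structure: str, volume_tb: float, needs_streaming: bool):
        """Map use-case characteristics to broad solution categories (illustrative)."""
        categories = []
        if structure == "structured" and volume_tb < 10:
            categories.append("conventional relational warehouse and analytics tools")
        else:
            categories.append("distributed storage and processing (data lake)")
        if needs_streaming:
            categories.append("stream processing for near-real-time analysis")
        categories.append("visualization layer sized to the user community")
        return categories


    print(candidate_categories("unstructured", volume_tb=40, needs_streaming=True))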

In summary, it is essential to remember that communication, change management, and governance are key to successfully deriving any meaningful and usable results from Big Data.  Other key success factors include:

  • Do not start with a technology focus. Instead, concentrate on business/mission requirements that cannot be addressed using traditional data analysis techniques.
  • Augment existing IT investments to address initial use cases first, then scale to support future deployments.
  • After the initial deployment, expand to adjacent use cases, building out a more robust and unified set of core technical capabilities.

These factors will ensure your agency adopts Big Data securely and effectively, achieving results at each iterative step and maximizing the use of your valuable resources.

Contact us today if you need help strategizing and implementing your Big Data project.



Incubation-as-a-Service: An eGT Labs Story


The evolution of digital technologies and practices is advancing faster than ever before, revealing new and innovative ways to improve productivity and disrupt the norm. Federal agencies, with their shrinking budgets, are finding it difficult to keep up. Although several large-scale IT modernization and digital transformation initiatives are underway to help close the gap, they are predominantly multi-year, multi-million-dollar investments that carry a considerable risk of failure. Agile and DevSecOps practices mitigate some of those risks, but federal agencies are often handcuffed by complex contracting processes that typically do not permit experimental innovation or tolerate failure. Lean Startup, a proven model to build, measure, and learn through experimentation, nearly always comes with a hefty price tag. At eGlobalTech (eGT), we witnessed these challenges first-hand and decided to invest in our own incubation program. The strong entrepreneurial spirit instilled by our late founder Sonya Jain drives us to incubate innovative ideas and build reusable tools. This culture naturally produced the perfect chemical concoction that created “eGT Labs,” a corporate-sponsored R&D arm focused on incubating high-value, reusable solutions, industry partnerships, and thought leadership designed to further our clients’ missions and capabilities, and it gave rise to our Incubation-as-a-Service model for innovation.

A Grass-Roots Movement

Keeping in line with “necessity is the mother of invention,” a unique and complex customer problem was the seed that gave birth to this model. Our project team at a federal agency had built a sophisticated and complex total cost of ownership (TCO) model using Excel spreadsheets. We are talking about 50+ worksheets with 3.5 million heavily nested formulas and roughly 400-500 columns. It produced magnificent cost predictions that came close to actuals, and its users regarded the tool as their Rosetta Stone. The problem was that Excel does not scale, and it became very hard to keep the model up to date and fix defects. Besides, loading a 20GB Excel file is no picnic. It was obvious that we needed an automated, web-based platform, so we invested our own resources and created the eTCO application. This gave birth to eGT Labs.

What Services Does eGT Labs Offer?

Our primary goal is to work with existing federal clients with whom we have established contracts. At no additional cost, we provide shrink-wrapped Incubation-as-a-Service for short-term engagements to prototype solutions that can dramatically improve outcomes. The service is delivered by multi-skilled digital analysts and engineers who work in a DevSecOps model, leveraging leading open-source technologies, tools, and cloud platforms. An engagement can come about through a recognized client need or an opportunity uniquely discovered by our project team. Regardless, we shed our contractor status and act as true partners invested in our clients’ success.

Over the past few years, this model has resulted in the following high-value tools and products that are actively utilized in our projects at the Department of Health and Human Services (HHS) and the Department of Homeland Security (DHS):

DevOps Factory™

End-to-end DevSecOps framework designed specifically to align with federal Software Development Lifecycle (SDLC) standards and capable of rapidly transforming nascent business objectives into production-ready, secure, functional, and usable software.


100% open-source tool that automates deployments of entire application stacks to the cloud in a single click. For more information, visit


Automates and integrates web application security vulnerability testing as part of the continuous integration and continuous delivery pipeline, enabling early detection and faster remediation of vulnerabilities while ensuring that only secure code is deployed.

Electronic Total Cost of Ownership (eTCO)

Stand-alone, web-based cost analytics model and visualization tool with 400+ metrics enabling users to model “what-if” scenarios and gain deeper insight on true cost of ownership of their data center infrastructure.

Other interrelated services offered by eGT Labs include:

  • Tools development to support ongoing advancement and maturation of technology tools
  • Managed cloud hosting services
  • Thought leadership on the applicability of emerging concepts and technological practices conveyed through white papers, case studies, and hackathons
  • Industry partnerships with leading software and cloud providers to enhance our knowledge and expertise

Journey into 2018

One of eGT Labs’ latest technology incubation experiments is “Site Monitor,” a cloud-hosted, web-based tool that continuously monitors the security posture of websites across an enterprise and keeps them compliant with newly released mandates from the Office of Management and Budget (OMB). Site Monitor can perform on-demand scans, surface compliance-related data quickly, display it in an easy-to-digest manner, and suggest the appropriate network security upgrades to achieve compliance (a simplified sketch of this kind of check follows the roadmap list below). While Site Monitor is actively being used by a prominent cabinet-level agency, we plan to release it for general availability this summer for adoption at other agencies. Our roadmap for 2018 is filled with several initiatives being executed concurrently. They include:

  • Adding features and capabilities to our current toolset for integration into new projects
  • Maturing our “Social Profiler,” a social media analytics prototype
  • Releasing white papers on applicability of emerging technologies and practices such as blockchain, machine learning, and artificial intelligence
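
As promised above, here is a simplified sketch of the kind of compliance check a tool like Site Monitor performs: testing whether a site responds over HTTPS and sends an HSTS header, two items commonly required by federal web security mandates. The check and the placeholder domain are illustrative assumptions, not Site Monitor’s actual implementation.

    import requests


    def check_https_posture(domain: str) -> dict:
        """Simplified probe: does the site respond over HTTPS and send HSTS?"""
        results = {"domain": domain, "https": False, "hsts": False}
        try:
            response = requests.get(f"https://{domain}", timeout=10)
            results["https"] = response.ok
            results["hsts"] = "Strict-Transport-Security" in response.headers
        except requests.RequestException:
            pass  # Unreachable over HTTPS; both checks stay False.
        return results


    print(check_https_posture("example.gov"))  # placeholder domain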

How Can Customers Leverage Our Incubation-as-a-Service?

Customers can reach out to their eGT project teams if they are interested in learning more about one of our eGT Labs products or in starting a conversation on how eGT Labs can help kickstart their digital innovation agenda. Contact us to learn more.


Copyright 2018 | eGlobalTech | All rights reserved.