The Uber Hack – Advanced Persistent Teenager Threat 

Uber, the ride-hailing giant, confirmed a major system breach that gave a hacker access to vSphere, Google Workspace, AWS, and much more, all with full admin rights.

In what will be remembered as one of the most embarrassing hacks in recorded history, the hacker posted screenshots from the consoles of the hacked platforms to the vx-underground Twitter handle as proof, including internal financial data and screenshots of the SentinelOne dashboard.

If you are going to hack a role, choose “Incident response team member” for optimal results

Before we dive into the “how”, let’s first explore the role of an Incident Response (IR) team member. When an incident (a hack, a production failure, etc.) occurs, the incident response team acts as the company’s “first responders”; when there is a fire, they are the “firefighters”. Due to the importance of their job, IR teams get an exceptional level of access, usually while on-call – making them an ideal target.

Zero Trust vs “Uber” Trust 

The hacker, who either targeted the IR team or stumbled upon it by chance, socially engineered a member of the IR team using a technique known as “MFA Fatigue”: bombarding the user with second-factor approval requests. The hacker then contacted the IR team member via WhatsApp, posing as an Uber IT support rep, and claimed the flood of requests would end as soon as the member approved the login.

Following the approval, the hacker was able to enroll his own device as a second factor, giving him everything needed to log in to the rest of the applications in the environment.
  

Once inside Uber’s internal network, the hacker located a shared folder containing a PowerShell script with hardcoded admin credentials and used them to access the company’s Privileged Access Management (PAM) platform, thus gaining access to Uber’s entire network.
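
A shared script with embedded admin credentials is exactly the kind of artifact a secrets scan would catch. As a minimal sketch (the regex patterns and the sample script below are made up for illustration, not Uber’s actual script or a production-grade scanner), a basic scan for hardcoded credentials might look like:

```python
import re

# Illustrative patterns for secrets commonly hardcoded in automation scripts.
# Real scanners use far broader pattern sets plus entropy checks.
SECRET_PATTERNS = [
    re.compile(r'(?i)\$?(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']'),
    re.compile(r'(?i)(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']'),
]

def find_hardcoded_secrets(script_text: str):
    """Return (line_number, line) pairs that look like embedded credentials."""
    hits = []
    for lineno, line in enumerate(script_text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# A made-up PowerShell snippet of the kind described above.
sample = '''
$ServiceUser = "svc-backup"
$Password = "Sup3rS3cret!"   # admin credential embedded in the script
Connect-Service -User $ServiceUser -Pass $Password
'''
for lineno, line in find_hardcoded_secrets(sample):
    print(f"line {lineno}: {line}")
```

Running a sweep like this over shared folders is a cheap way to find the credentials an intruder would otherwise find first.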

While the principles of “Zero Trust” call for reducing the attack surface by segregating networks, applications, and access, Uber’s architecture granted the user “Uber” rights.

The hacker path:

Social Engineering => Duo MFA => 2nd-factor approval => Added device as 2nd factor => VPN access => Viewed internal network => PowerShell script with privileged credentials => Access to PAM => GOD_MODE

Centralized Authentication: 

The flaw behind this ordeal is an inherent one, shared by every centralized authentication scheme. Let’s elaborate a bit: back in the day, we managed credentials per application or data repository, so a breached identity’s attack surface applied only to a single application, giving a 1:1 attribution between identity and action; the price was that we had to manage a lot of credentials.

This method was decentralized by nature and, from a scalability standpoint, impossible to sustain.


To circumvent the scalability issue, we created Identity Providers (IdPs): a centralized approach that lets us share an identity across applications using a single set of credentials. This increased our potential attack surface across the organization, and, understanding that risk, we added authentication factors, each with its own flaws.

Trading decentralized authentication for an IdP brought better scalability and a better user experience, and answered operational needs. It also led to hacks just like the Uber hack. Centralized authentication means that once you are in, you are in. Have fun!

What can we do? 

But Uber did have Duo MFA! And SentinelOne! So what did they do wrong?

Nothing, really. They followed industry standards; the problem is that these standards will not protect you once a hacker is in.

Decentralized authentication is not coming back, nor should it! But what if we took a different approach and decoupled authorization from authentication using dynamic access policies? Instead of just adding authentication factors, we can add authorization factors according to risk.

This model treats authorization as a dynamic factor that correlates with a “circle of risk”, adding authorization factors that provide an extra level of assurance, both in verifying the user and in preventing human errors caused by standing privileges.

At Apono, we solved this issue by enabling users to bundle permissions and associate them with users, creating Dynamic Access Flows that connect these circles of risk and add Multi-Factor Authorization to the access policy.

Each circle of risk is represented as a permission bundle; for each bundle, the admin can create a policy that combines different authorization factors and a time frame for the access:

  • User Justification – The user must write a reason for needing access.
  • User + Admin Justification – When an access request is created, both the requester and the admin must provide a reason to the approver.
  • Owner Approval – Access is granted when the owner of the group of permissions approves it.
  • IDP Owner Approval – Access is granted only when the IDP owner approves the request.
  • Restricted Timely Access – Access is open only for a defined period of time and automatically revoked when that period ends.
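
As an illustration only (the field names and factor identifiers below are hypothetical, not Apono’s actual policy syntax), a permission-bundle policy combining authorization factors with a time-boxed grant could be modeled like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical sketch of a permission-bundle policy; the real product's
# policy model and field names may differ.
@dataclass
class AccessPolicy:
    bundle: str                                   # the permission bundle (circle of risk)
    factors: list = field(default_factory=list)   # required authorization factors
    ttl_hours: int = 4                            # restricted timely access window

    def grant(self, request: dict, now: datetime):
        """Return (expiry, missing_factors); expiry is None until all factors are met."""
        missing = [f for f in self.factors if f not in request.get("satisfied", [])]
        if missing:
            return None, missing
        # Access is time-boxed: the revocation time is computed up front.
        return now + timedelta(hours=self.ttl_hours), []

policy = AccessPolicy(
    bundle="prod-database-read",
    factors=["user_justification", "owner_approval"],
    ttl_hours=2,
)

now = datetime(2022, 9, 19, 9, 0)
expires, missing = policy.grant({"satisfied": ["user_justification"]}, now)
print(missing)   # owner approval not yet given
expires, missing = policy.grant(
    {"satisfied": ["user_justification", "owner_approval"]}, now)
print(expires)   # access auto-revokes two hours after the grant
```

The key design point is that authorization factors and the expiry are evaluated per request, so no grant outlives its policy window.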

With Apono, you can create Declarative Access Policies, defining authorization factors using our Declarative Access Flow Wizard: access flows with an array of authorization factors, approver groups, and two-step human authorization.

Effective Privilege Management in the Cloud – Mission Impossible?

TLDR: Overprivileged access is a natural consequence of manually granting and revoking access to cloud assets and environments. What DevOps teams need are tools to automate the process. Apono automatically discovers cloud resources and their standing privileges, centralizing all cloud access in a single platform so you don’t have to deal with another access ticket ever again.

How much access to cloud resources do your developers really need?

In an ideal world, you would give access to whoever needs it just for the time they need it, and “Least Privilege” access policies (meaning both “Just-in-Time” and “Just Enough”) would be the norm.

But we don’t live in an ideal world.

Cloud infrastructure is dynamic and constantly changing. Some resources, such as cloud data sets, may include more than one database, each with its own set of access requirements. For example, a user could require read/write rights for one and read-only rights for another.

In theory, you should keep track of all these access rights and revoke and grant them as needed. But in practice, we don’t have the tools to automate cloud access management, which leads us to give more access than we should.

What is overprivileged access?

Overprivileged access is when an identity is granted more privileges than the user needs to do their job. In the cloud, this happens all the time.

For example, a developer needs access to an S3 bucket for a couple of hours each Monday in June to do some testing. After they are done, they won’t need that access again until a sprint with a task requiring it comes up.

If you were to go by the book, you would need to manually grant them access and then manually revoke it every Monday for four weeks.
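
The recurring Monday scenario above can be sketched as a small Just-in-Time grant manager with automatic revocation. This is an illustrative simulation with made-up user and resource names; a real implementation would call the cloud provider’s IAM APIs instead of an in-memory table:

```python
from datetime import datetime, timedelta

class JitAccessManager:
    """Minimal sketch of Just-in-Time access with automatic revocation."""

    def __init__(self):
        self.grants = {}  # (user, resource) -> expiry time

    def grant(self, user, resource, hours, now):
        # Every grant carries an expiry from the moment it is issued.
        self.grants[(user, resource)] = now + timedelta(hours=hours)

    def sweep(self, now):
        """Revoke every grant whose window has passed; return what was revoked."""
        expired = [k for k, exp in self.grants.items() if exp <= now]
        for k in expired:
            del self.grants[k]
        return expired

    def has_access(self, user, resource, now):
        exp = self.grants.get((user, resource))
        return exp is not None and now < exp

mgr = JitAccessManager()
monday_9am = datetime(2022, 6, 6, 9, 0)
mgr.grant("dev1", "s3://testing-bucket", hours=2, now=monday_9am)

print(mgr.has_access("dev1", "s3://testing-bucket", monday_9am + timedelta(hours=1)))  # True
mgr.sweep(monday_9am + timedelta(hours=3))
print(mgr.has_access("dev1", "s3://testing-bucket", monday_9am + timedelta(hours=3)))  # False
```

A scheduler re-running `grant` each Monday and `sweep` continuously removes the human from the loop entirely.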

This is simply not sustainable. The ratio of DevOps engineers to developers is already 1 to 10, and it’s not possible for DevOps engineers to constantly drop what they are doing to provision or revoke access. We’ve got other stuff to do.

When a developer needs access to a sensitive S3 bucket that contains customer data, it’s often not clearly defined which permissions will be enough for the user to do their job. We address this problem by providing more access than we should in order to avoid becoming a bottleneck. As a result, the whole role gets overprivileged permissions. What we’re left with is overprivileged role-level access that affects a large number of users and is unlikely ever to be revoked.

Another common way overprivileged access creeps into your cloud stack is when “Read/Write” access is granted to users who need only Read rights for a limited time. An overprivileged identity with Write access can do great damage if it’s compromised.

To make matters worse, managing access is kind of dull. Nothing is less exciting than dealing with another access ticket. Managing access is the task you want to get over with as quickly as possible.

Without automation, it’s impossible to implement granular access provisioning, revoke access in a timely manner, or even just keep tabs on existing policies. And that, folks, is how overprivileged access to cloud resources became the norm.

Why overprivileged access is a problem

Today, overprivileged access is everywhere. And it’s a serious problem for several reasons:

1) Attack Surface = Permissions x Sensitive Cloud Resources

Overprivileged access is one of the biggest security risks in the cloud. In recent years, the vast majority of breaches (81%) have been directly related to passwords that were either stolen or too weak.

But it’s not just about passwords. It’s about the way cloud resources are accessed and used.

Overprivileged access significantly increases the blast radius of an attack. When an attacker obtains a set of valid credentials, the permissions linked to those credentials determine what they can and cannot do in your environment.  The more permissions a compromised identity has, the bigger the attack surface.

In the cloud era, permissions are your last line of defense: the right permissions are what prevent unauthorized identities from accessing your company’s sensitive data. Therefore, tailoring access to the task at hand drastically reduces the risk.
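
A toy calculation makes the formula concrete. Assuming a made-up identity with permissions on three sensitive resources, tailoring access to the task shrinks the number of exploitable permission/resource pairs:

```python
# Toy illustration of "Attack Surface = Permissions x Sensitive Cloud Resources":
# count the (permission, resource) pairs a compromised identity could exercise.
# Resource and permission names are invented for the example.
def attack_surface(identity_permissions):
    """identity_permissions maps each sensitive resource to the permissions held on it."""
    return sum(len(perms) for perms in identity_permissions.values())

overprivileged = {
    "customers-db": {"read", "write", "delete"},
    "payments-db": {"read", "write", "delete"},
    "logs-bucket": {"read", "write", "delete"},
}
least_privilege = {
    "customers-db": {"read"},   # only what the current task needs
}

print(attack_surface(overprivileged))    # 9 permission/resource pairs at risk
print(attack_surface(least_privilege))   # 1
```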

2) Complexity & Lack of Visibility

Another issue with overprivileged access is that it makes cloud environments more complex than they need to be. When everyone has full access to everything, it’s very difficult to keep track of what’s going on.

This can make it hard to troubleshoot issues, diagnose problems, and comply with regulations.

The harm from overprivileged access doesn’t come only from malicious actors. All humans make mistakes, and your employees are human.

3) Mistakes will happen

According to the 2022 Data Breach Investigations Report, human error is to blame for eight out of 10 data breaches. Overprivileged access significantly increases the risk of such mistakes and the resulting fallout.

The burden of access management falls on DevOps teams

Traditionally, access management has been the domain of IT security, but as cloud adoption increased, the burden of managing cloud access has fallen upon the shoulders of those responsible for the cloud infrastructure.

More and more DevOps engineers are finding themselves in charge of their organizations’ access management policies.

In today’s public cloud reality, provisioning of access is becoming an ever more important part of DevOps engineers’ day-to-day work.  And that’s where the balancing act begins:

  • You want to give developers the freedom to work on whatever they need to get the job done.
  • You know that overprivileged access is a dangerous thing, but you can’t spend every hour of every day stopping what you are doing to give and then revoke access to cloud resources.

A cloud-native approach to access provisioning

Moving to the cloud is a transition towards a more agile way of working, which necessitates a subsequent shift to dynamic permission management.

So what are we to do?

The answer, as with most things in a DevOps engineer’s life, lies in automation. We need to find a way to automate cloud access management so that DevOps engineers can focus on their actual jobs and not spend all their time managing access.

We need a tool that is:

– Easy to use

– Scalable

– Seamless

And that is where Apono comes in.

Apono simplifies cloud access management. Our technology discovers and eliminates standing privileges using contextual, dynamic access automation that enforces Just-in-Time and Just Enough Access.

With Apono, it is now possible to seamlessly and securely manage permissions and comply with regulations while providing a frictionless end-user experience.

Are you ready to never have to worry about cloud access provisioning again? Get in touch with us today.

What we can learn from the LastPass hack

LastPass, a password manager with over 33M users, reported that an unauthorized party hacked into its development environment. The hackers gained access through a single breached developer account.

Don’t act all surprised, getting hacked is a “WHEN” not an “IF” question 

Everyone gets hacked eventually; the bigger a company is, the bigger the target on its back. But LastPass is no ordinary company. Consider the risk entailed in a service that generates and stores passwords: LastPass is a “Key Master”, which means that if customers’ passwords were compromised, the attack surface would trickle down, potentially affecting customers and their own customers and users.

LastPass reported that the breach did not reach any customer data; only the company’s source code was taken. LastPass users, rejoice! But it did take the company two weeks to confirm that was the case, which sounds like a long time to evaluate the effect of a hack. It’s actually not, as Allan Liska, an analyst for Recorded Future, commented for Bloomberg:

“While two weeks might seem like a long time to some, it can take a while for incident response teams to fully assess and report on a situation,” he said. “It will take time to fully determine the extent of any damage that may have been a result of the breach. However, for now it appears to not be client-impacting.”

“We got hacked!” knowing is half the battle, what got hacked is the other half

Some of you might ask, “Why does it take two weeks?” A valid question: a lot can happen in two weeks. Just imagine what a hacker can do with two weeks of unrestricted access, or what you could do with two weeks off. The reason it took two weeks is simple: understanding the attack surface of the hack requires knowing which permissions the breached identity has, and those are now in the hands of the attacker;

or in other words:

Breached Users Permissions = Potential Attack Surface

The potential attack surface of a company that stores passwords is enormous. Without a mapping of the databases and applications the breached developer’s identity could reach, an incident response team had to investigate and assess the blast radius of the attack.

Attack Surface vs. Blast Radius

“Attack Surface” represents the potential impact of an attack, meaning the total number of services, databases, and applications a breached identity had access to.

“Blast Radius” represents the actual impact of the security event, or in other words, the actions taken and the data accessed by the breached user.

To understand a security event’s attack surface, we need to know which permissions the breached identity had. To understand the blast radius, we need to check each action the attacker could have taken and investigate its implications.
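
The distinction can be expressed directly in code: the attack surface is derived from the breached identity’s permissions, while the blast radius is derived from the audit log of what was actually done. The identities, resources, and log entries below are illustrative:

```python
# What the breached identity COULD reach (permissions) vs. what the
# attacker actually DID (audit log). All names are made up.
breached_identity_permissions = {
    "source-repo": {"read", "write"},
    "build-secrets": {"read"},
    "customer-db": {"read"},
}

audit_log = [
    ("source-repo", "read"),
    ("source-repo", "read"),
    ("build-secrets", "read"),
]

attack_surface = {(r, a) for r, perms in breached_identity_permissions.items()
                  for a in perms}
blast_radius = set(audit_log)

print(len(attack_surface))            # everything the attacker could touch
print(len(blast_radius))              # everything the attacker did touch
print(blast_radius <= attack_surface) # the blast radius is bounded by the surface
```

This is also why incident response is slow without 1:1 access attribution: if the permission map on the left is unknown, the bound on the right cannot be computed.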

Why does it take two weeks, you might ask? Because we usually do not revoke access when users are done with it, we grant excessive privileges regardless of the task at hand, and we lack a proper way to monitor cloud access, which makes investigating the blast radius a tedious task.

Have no fear, Apono is here

We created Apono to solve this exact scenario. Apono’s centralized cloud access management solution takes a “Least Privilege by Default” approach: it maps cloud access, policies, and resources, attributes them to users, and then suggests how to convert them into “Just-in-Time” and “Just Enough” dynamic policies that give users granular access to resources for only as long as they need it, after which the access is automatically revoked. Apono records the entire approval timeline, so you always know who accessed what, when, and who approved it.

Apono drastically cuts down the attack surface by ensuring granular, timely access. Our access activity monitoring capabilities ensure no standing privileges are jeopardizing your organization, and our 1:1 access attribution capabilities make investigating the blast radius of an attack a breeze.

How we passed our SOC2 compliance certification in just 6 weeks with Apono

We recently went through the SOC2 process and are happy to report that we successfully passed our audit! Generating a SOC 2 Type 1 Report generally takes up to six months. In our case, the entire process took only 6 weeks, and we wanted to share how we did it.

TLDR: We used Apono’s cloud-native privileged access management solution to streamline our access review process and make the SOC2 audit much easier for us (and our auditor).

Our SOC2 journey

If you serve customers in regulated industries such as healthcare, finance, or the public sector, you will likely need to obtain SOC2 certification at some point.

For those who don’t know, SOC2 is the gold standard for security certifications. It is becoming increasingly common for SaaS companies to get SOC2 certified to reassure customers that all the necessary controls are in place to protect their data.

SOC2 reports measure a company’s security through the lens of AICPA’s Trust Services Criteria across five major categories:

  • Security – How effectively do you protect critical systems against unauthorized access?
  • Availability – How do you facilitate customer access to systems, including business continuity measures during and after an attack?
  • Processing Integrity – How do you upkeep all promised services’ functionality, including timeliness, accuracy, completeness, and integrity of authorization protocols?
  • Confidentiality – How do you safeguard all information classified as protected?
  • Privacy – How do you safeguard all personal information and personally identifiable information (PII)?

The SOC2 compliance report is a public attestation that your systems and controls have been assessed by an independent auditing firm and that they meet or exceed the standards for security, availability, processing integrity, confidentiality, and privacy.

The SOC2 certification process is notoriously long and arduous, but we are happy to report that we obtained our SOC2 certification in just six weeks from start to finish.

Apono helped us in two ways:

  • Generating an access review in a matter of seconds
  • Providing auditors with a live view of access to our production environment

Meeting SOC2 security requirements

SOC2 compliance covers a lot of ground and involves solidifying company policies, including access to sensitive resources covering both physical and digital access control.

We are cloud-native, so physical protections around data centers don’t apply to us. Access to digital resources is another matter. The thing about cloud resources is that you don’t hack in; you log in. That’s why access control is such an important part of SOC2.

SOC2 Access Control Requirements

SOC2 has several controls for access. Auditors will want to see that you have strong controls around:

  • Who has access to what
  • What they can do with that access
  • How you monitor and restrict access
  • How you uphold the Least Privilege principle
  • How you enforce separation of duties and roles
  • How you handle employee onboarding and offboarding

To meet these requirements, you’ll need to generate an access review report that includes:

  • A list of all users and their roles
  • A list of all systems and applications that each user has access to
  • What each user can do with that access (e.g., read-only, write, execute, etc.)
  • Procedures for granting and revoking access

The access review report is one of the most time-consuming and tedious parts of the SOC2 process. It involves manually reviewing Access Control Lists (ACLs) and then comparing them to lists of employees and their job descriptions to see if there are any discrepancies.

Sifting through all of that data is a huge pain, but we were able to generate an access review report in just a few seconds. Apono’s platform automatically and continuously maps out user roles and permissions across all systems and applications. So it was effortless to generate a report that includes all of the information required by SOC2.

Not only did this save us a ton of time, but it also ensured that our access review report was 100% accurate.

Moreover, we could automatically generate an access review report anytime we needed it during the certification process. This was incredibly useful because it meant we could easily re-run the report to reflect any changes in personnel or systems.

This huge time-saver allowed us to focus on other aspects of SOC2 compliance. Going forward, we can easily run the report anytime on demand if there are concerns about potential unauthorized access.

Our auditor was impressed with how quickly we could supply the access information they needed.

Access to production environment: live view

It’s not enough to have controls in place – you also need to be able to monitor and audit access on an ongoing basis.

Auditors will want evidence that you’re regularly reviewing and revoking access.

This is important for two reasons:

  • To make sure that the controls are being followed
  • To be able to detect and investigate misuse of data or systems

Auditors will want access logs to see who did what, when they did it, and from where. We could provide them with something better: a live view of access to our production environment that they could monitor in real time.

This gave them visibility into our entire system: a real-time view of exactly who was logged in, which resources they had access to, what they were doing with that access, and from where. This provided valuable evidence that our access controls were working as intended, and it was a huge selling point for our auditor.

Overall, Apono was an invaluable tool for streamlining our SOC2 compliance process. 

But it’s not just about passing the SOC2 compliance certification in record time (although that is a huge plus!). It’s about handling your cloud access in a way that’s secure, efficient, and scalable for the long haul. So if you’re looking for a platform for managing access control and compliance in the cloud, book a demo with Apono today. We’d be happy to show you how our platform can help you become secure and compliant while maintaining your productivity and agility.

Top 5 AWS Permissions Management Traps DevOps Leaders Must Avoid

As born-in-the-cloud organizations grow, natively managed Identity and Access Management (IAM) tools are becoming a growing concern. Although DevOps teams tend to bear the burden of cloud IAM provisioning, the operational challenges transcend functional silos. Even when SREs and infrastructure teams are closely aligned with security leaders, using native IAM tools to provision access with granular control is unsustainable. No one would contest the need for authorized personnel to get “Just Enough” access whenever they need it, “Just in Time” (aka JIT). Still, teams managing cloud-first deployments struggle to deliver effective access control at scale. While regulatory compliance requirements can act as a trigger for business continuity enablement, many companies are carrying unacceptable levels of risk in the form of “cloud IAM debt”. The following list of cloud permissions management traps may sound familiar to DevOps leaders. Avoiding them is trickier than you might think!

  1. Attempting to solve permissions management as an engineering challenge.
    In a perfect world, any authorized stakeholder could access just enough cloud resources to get the job done “just in time”. In practice, cloud Identity and Access Management  (IAM) policy configurations are not only complex, but a dynamic work in progress. When DevOps teams do attempt to provision just the right mix of AWS IAM configuration accounting for policy types, permission boundaries, and ACLs, the resulting homegrown solution rarely scales over time. Although DevOps and SRE teams own cloud IAM provisioning, risk management considerations define InfoSec governance.  Without clearly defined processes to determine how data governance guardrails can support IAM provisioning, such homegrown solutions cannot address the business challenge. 
  2. Letting compliance data governance requirements define IAM management
    To support smooth operations, most DevOps teams tend to over-provision as a matter of course. As the business matures, this approach does not support risk management considerations (e.g. privileged access to and governance of regulated or otherwise sensitive customer data). Once compliance requirements enter the mix, productivity inevitably suffers.  Without dedicated security controls to address usage attribution, reviews, and approval processes, DevOps teams tend to lose control. 
  3. Ignoring the need for an enterprise-wide user provisioning workflow
    The reality of JIT access requirements tends to be more dynamic than anyone can anticipate. The solution must therefore address the challenge holistically, beyond the scope of any single functional team (SREs and DevOps vs. infrastructure teams or InfoSec). Although addressing standard ad-hoc scenarios such as on-call personnel or “break glass” access certainly represents a good start, a more thorough analysis tends to uncover multiple use cases to address. Some situations will require a human approver’s mediation, especially when granting access to PII data assets. Time-sensitive access scenarios such as “on-call” shifts are good candidates for unmediated automation.
  4. Neglecting the impact of infrastructure teams
    When ongoing IAM provisioning policies do not address JIT access requirements, support-ticket fatigue can overwhelm cloud infrastructure teams. As organizations increasingly rely on manual processes, it is imperative to identify opportunities to reduce backlog. Even a simple requirement to enable CLI access while supporting SSO connectivity can linger for long periods of time. Although tagging conventions can help to address the bigger picture, a lack of collaborative planning across functional silos often prevents effective implementation of holistic enterprise-wide solutions.
  5. Tolerating standing privileges as a necessary evil
    Security teams are well aware of the benefits of enforcing a zero standing privileges (ZSP) operational model, which eliminates “always on” access and therefore reduces the attack surface dramatically. This straightforward goal is tricky to achieve beyond the scope of security. Established DevOps success metrics and related priorities rarely address the discovery of standing privileges, let alone a structured operational model to eliminate them entirely. As a result, organizations have come to terms with standing privileges as an unavoidable security blind spot. Interestingly, the benefits of usage monitoring and of attributing identities to resources transcend risk management considerations. By adopting a “shift left” approach to IAM provisioning, DevOps teams are discovering new opportunities to improve success metrics such as mean time to repair (MTTR).
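
Discovering standing privileges usually comes down to comparing what is granted against what is actually used. A minimal sketch, with invented identities and a 90-day idle threshold as assumptions (a real discovery tool would read grants and usage from the provider’s IAM and audit APIs):

```python
from datetime import datetime, timedelta

# Flag grants with no recorded use in the last 90 days as candidates for
# elimination. All identities, permissions, and timestamps are illustrative.
now = datetime(2022, 8, 1)
granted = {
    ("ci-bot", "deploy:prod"),
    ("dev2", "s3:secrets-bucket"),
    ("dev2", "rds:staging-db"),
}
last_used = {
    ("ci-bot", "deploy:prod"): now - timedelta(days=2),
    ("dev2", "rds:staging-db"): now - timedelta(days=30),
    # ("dev2", "s3:secrets-bucket") was never used
}

def stale_privileges(granted, last_used, now, max_idle_days=90):
    """Return granted privileges that are unused, or idle past the threshold."""
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(
        grant for grant in granted
        if last_used.get(grant) is None or last_used[grant] < cutoff
    )

print(stale_privileges(granted, last_used, now))  # [('dev2', 's3:secrets-bucket')]
```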

Getting cloud IAM provisioning right can only succeed by addressing the manual workflows that currently support multiple teams – namely DevOps, infrastructure, and security. The imperative to remove bottlenecks impacts the business as a whole, but also the success of established functional departments. Once priorities and goals are clearly aligned across departments, the solution is a natural next step. 

Learn how Apono empowers teams to improve performance without compromising on security!

Privileged Access Governance

How a DevSecOps Initiative Could Have Prevented the IKEA Canada Privacy Breach

Earlier this week, IKEA Canada confirmed that an employee had accessed private customer information. Although the official announcement did not provide details, it’s a safe bet to assume that controls related to data governance and regulatory compliance are the primary guardrails that led to the revelation. Unfortunately, this particular case hardly represents an isolated incident. 

While data loss is on the list of most concerning threats to DevSecOps success, Identity and Privileged Access Management (IAM & PAM) are at the top.

Regulatory compliance can be an effective guardrail. Still, infrastructure and operations leaders are united on the urgent need to implement a DevSecOps initiative. Regardless of where organizations are on their DevOps journey, a 2021 Cloud Security Alliance survey confirms the tightly coupled relationship between privileged access and DevSecOps success, and the DevOps community clearly faces a mounting challenge.

Who Controls Privileged Access to What and When?

By IKEA Canada’s own admission, an employee used a “generic internet search” to query personally identifiable information (consumer PII). In other words, an overprivileged user or machine identity queried a shared data asset that included restricted information. To make matters worse, no controls were in place to prevent the privacy breach from recurring over a 72-hour period before security operations teams were alerted.

Effectively answering the following questions will impact every department spanning IT, infrastructure engineers, application developers, and security operations:

  • Who requests (and approves) privileged access to sensitive data?
  • What assets contain sensitive data?
  • When is privileged access warranted by authorized parties?


How Dynamic Privileged Access Could Prevent Data Exposure

The shared high-level goal is to strike the right balance between “Just Enough” privileged access to address security concerns, and “Just in Time” access grants to ensure smooth business operations.

For simplicity, let’s assume the sensitive information was stored in one shared database functioning as a single point of failure that enabled unauthorized access to sensitive data. Without an enterprise-wide DevSecOps initiative in place, the engineers charged with developing and maintaining critical systems typically face an impossible choice between bad and worse. By restricting data access to authorized personnel only, engineers could theoretically prevent illicit access. Unfortunately, using legacy technology to implement such measures would effectively cripple business operations. This tradeoff is familiar to anyone grappling with static role-based access control (RBAC). As DevOps transformation initiatives deepen, enterprises have begun to explore dynamic access workflows that account for requester, approver, asset, and duration. Taking this approach a step further, teams with significant production workloads in the cloud can leverage tagging practices that clearly separate data assets containing sensitive information (e.g. customer PII).

The DevSecOps Transformation Challenge

By supporting dynamically contextualized access to sensitive data, teams can get the job done while eliminating unauthorized parties from ever exposing customer PII in the first place. 


DevSecOps can only be successful by addressing the three core elements of security, namely people, culture, and technology. Long-term collaboration between people can create foundations that build bridges across traditional organizational silos (e.g. application developers working alongside security operations practitioners). It’s up to C-level leadership to embrace the success of isolated initiatives and build out processes that permeate throughout the organization. Finally, disruptive technologies focused on the key challenges (namely cloud IAM and PAM) are critical to empower the workforce to step up and embrace positive change.

Ready to Embrace Cloud-first Privileged Access? 

Learn how Apono’s approach to cloud-first Privileged Access Management enables DevSecOps Transformation!