In this webinar, we discuss the evolving nature of IT environments, the need for a security culture shift, the challenges and opportunities in modern IT security and the balance between security and user friendliness.
“Three main changes have occurred in the modern IT environment. The first is that it’s not just an IT department. The modern IT department encapsulates engineering as well because once we’ve shifted to the cloud, we’ve actually shifted the environment and infrastructure to not only belong to what was considered IT but to actually be in the hands of the engineering team, whether they like it or not. Two other pieces: It’s a larger environment, it’s a more complex environment and at the same time, it’s so much easier to make it larger and complex, because today, a click of a button, you can quadruple the size of your environment. The third thing here is stricter regulations we’re seeing in the market today when it comes to managing access.”
“The modern IT environment is this great engine of next-gen anything—AI everything, and microservices everything. What we’re seeing right now is an inflection point where enough people are starting to do multi-factor. We’ve said for a long time that a lot of attacks would go away if we did multi-factor, and we’ve reached that inflection point and the attackers aren’t stopping. I thought once everyone was running multi-factor, you should stop, go home, game over, we’ve won. But funny enough, the criminals are working around that. We’re starting to see phishing attacks, flood attacks, proxy attacks.”
“Two biggest challenges are baggage and culture. Baggage is legacy, meaning we build something and tend to leave it alone. If it ain’t broke, don’t fix it, then we just layer things on top of it, so we end up with layers and layers of baggage that we have to deal with and unwind. If you’re looking at zero-trust as one more layer, then you’re looking at it wrong. I think you need to look at zero trust as an opportunity to simplify. It’s also a cultural exploration, too, because it’s a change in the way we think about security.”
“When everyone started with the Cloud, it really changed things in containers and how we think about permissions to those environments. It’s radically different from how we used to treat a network.”
Access management requires a careful balancing act between access control and productivity. On one hand, privileged access exposes the organization to risks. On the other hand, if we restrict it too much, we end up with bottlenecks resulting in a lack of productivity.
“Today customers are expecting very fast value, so your teams are required to perform their work quickly with as little friction as possible. I think most companies today are in a lose-lose situation. Most companies today are somewhat over-privileged in their environment, or maybe a lot over-privileged.”
“For the longest time, we’ve thought about where the user is going. When we think about just-in-time, if you don’t have a good infrastructure component to do it, you’re going to prioritize very rigorously, you’re going to look at your most privileged systems, you’re going to look at your most privileged sensitive data, and you’re going to look at the things that lure in adversaries. But I would argue, just like with any other control, the easier it is to deploy, the better the user experience.”
“We’re in a really neat place when it comes to juxtaposing security with user experience. And we as security people, have always made it harder for users. That’s just what we do. We’ve had password requirements and then we just layer on additional password requirements, which just force users to do unnatural things. We’re at a point now where I think we can provide better user experiences with actually up-leveling that security. Part of that is the workflows around the automation and figuring out which applications are touching which data.”
“I keep hearing that the need for ease of use for users and simplicity is becoming a critical attribute of a security system, and if you’re not starting with that in mind and making sure that whatever control you’re putting in place takes that user experience or administration in mind, then it’s probably not going to scale for this new, modern world, which is very agile.”
Zero trust is a concept focused on limiting access to specific resources based on the principle of “never trust, always verify.” It involves dynamically adjusting trust boundaries, adopting a policy-driven approach, and relying on trust signals and telemetry to enforce security.
There’s been a shift away from traditional network-centric security to user and identity-centric security. Zero trust is seen as a way to achieve a more secure and adaptable security posture in today’s diverse and dynamic IT environments. Zero trust is not just a framework but a mindset shift that needs to be integrated into processes and DNA. It’s an ongoing process rather than a one-time implementation.
“I think the network has somewhat shifted into the management of the cloud itself. At some level, it’s now a policy that’s being managed in the cloud, so the network itself is just another resource. Somewhat similar to how zero trust moves the controls into the resources, now we’re managing these policies across multiple clouds and multiple applications, and that’s actually managing access today . . . Having all the roles very granularly defined is something that’s very hard to accomplish. That’s when you need to build your strategy around what’s most important. What are the crown jewels? Let’s start off from there and then slowly build it rather than trying to achieve zero trust over the whole environment.”
“We shouldn’t be able to do authorization and get our least privileged access (whatever those entitlements and privileges are) all the time. Why should those be standing? If the context and conditions change or if I’m doing something inappropriate, I should be able to revoke that trust. I’m trusting the person to authenticate. I’m trusting the person within my application and that policy enforcement, the idea of I’m going to do just-in-time access or I’m going to do risk-based authentication I think is the critical differentiator between zero trust and what we used to do.”
“This is the natural inevitability of the world we find ourselves in. We’re all cloud, we’re all mobile, we need the same security constructs across everything that we do. I always look at zero trust as a lifestyle choice. It’s a change of behaviors. It’s an evolution of how we think about security and moving security to the edge, moving security to the endpoints that end up being the user on the access device accessing the application where the data lives.”
“Zero trust means we are no longer trusting someone just because they have an IP on the network. We are going to map users to applications, workloads, servers, etc. It’s about starting with user identity and mapping to what they need. Not the network. You don’t give access to the network, you give access to the specific thing you need.”
For many organizations using AWS, the challenge of maintaining a least-privilege posture in their cloud operations is becoming increasingly difficult. This difficulty stems from the need to build access systems from scratch, remodel legacy tools, and prepare for future cloud service add-ons.
In addition, organizations are struggling with creating and managing AWS IAM users and roles, especially as their teams grow and need more accounts. This is why AWS created AWS IAM Identity Center (formerly known as AWS SSO). It can act as a standalone identity provider for logging into AWS, or it can connect with existing providers, such as Okta and Google.
In this post, we’ll be discussing the methods available for setting up and managing identities in the AWS IAM Identity Center—including the 5 steps everyone must do.
IAM Identity Center, the successor to AWS Single Sign-On, is a cloud service that allows you to grant your users access to the AWS access portal or CLI across multiple AWS accounts. The name change to IAM Identity Center highlights its foundation in IAM while reflecting its expanded functionality and its recommended role. By default, AWS IAM Identity Center now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups.
Through this platform, it’s possible to manage access to various AWS accounts and applications. This approach is recommended as the primary gateway to AWS, as it empowers you to select your preferred identity source for seamless integration across the AWS ecosystem. Moreover, it enhances your security posture by maintaining consistent permissions across different AWS accounts and applications.
For smaller organizations that need only one account, AWS IAM is a great choice, but for larger organizations with multiple accounts, moving to AWS IAM Identity Center is the right choice.
When you set up an AWS account, you start with one primary sign-in identity, granting full access to all AWS services and resources within the account.
This identity is known as the AWS account root user and is accessible through the email address and password used during the account creation process. (However, it is strongly advised against using the root user for regular tasks. Instead, it is crucial to safeguard the root user credentials and reserve their use for tasks that specifically demand the root user’s privileges.) To begin, you need to sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password.
APONO TIPS
1. It’s important to safeguard your root user credentials. Don’t use the root user for your everyday tasks; instead, reserve it for the tasks that only the root user can perform.
2. Remember, AWS IAM Identity Center acts as an SSO portal to all the applications in the organization, not only AWS.
In IAM Identity Center, you have the flexibility to create users and groups directly within the system or utilize existing users and groups from Active Directory or another external identity provider. However, before IAM Identity Center can grant access permissions to users and groups within an AWS account, it needs to be informed of their existence. Likewise, applications enabled by Identity Center can interact with users and groups that are known to IAM Identity Center. Provisioning in IAM Identity Center varies based on the identity source that you use.
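For teams that use the default Identity Center directory, users and groups can also be provisioned from the command line. The following is only a minimal sketch using the AWS CLI identitystore commands; the identity store ID, names, email address, and returned IDs are placeholders you would replace with your own values.

# create a user in the Identity Center directory (IDs and values below are placeholders)
aws identitystore create-user \
  --identity-store-id d-1234567890 \
  --user-name jane.doe \
  --display-name "Jane Doe" \
  --name GivenName=Jane,FamilyName=Doe \
  --emails Value=jane.doe@example.com,Type=work,Primary=true

# create a group and add the user to it
aws identitystore create-group \
  --identity-store-id d-1234567890 \
  --display-name "Developers"

aws identitystore create-group-membership \
  --identity-store-id d-1234567890 \
  --group-id <group-id-returned-above> \
  --member-id UserId=<user-id-returned-above>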
In order to define the permissions and policies that govern what users have access to within an AWS account, you will need to go into the admin console and configure it all from there unless you use a tool such as Apono.
APONO TIPS
Keep in mind that with AWS IAM Identity Center, it’s possible to reuse existing IAM policies in your permission sets.
In the Identity Center, permissions are administered through permission sets, which are essentially groupings of IAM policies. When a user or a group is assigned a permission set associated with an account, the Identity Center will automatically generate corresponding IAM roles within that account. These roles inherit policy configurations from the respective permission set. Furthermore, each role is equipped with a trust policy that ensures the role can only be assumed after the user has been authenticated by the federated identity provider.
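As a rough illustration of how this fits together, here is an AWS CLI sketch that creates a permission set, attaches an AWS managed policy, and assigns it to a group in a member account. The instance ARN, account ID, permission set ARN, and group ID are placeholders for this example.

# create a permission set on your IAM Identity Center instance (ARNs and IDs are placeholders)
aws sso-admin create-permission-set \
  --instance-arn arn:aws:sso:::instance/ssoins-1234567890abcdef \
  --name ReadOnly \
  --session-duration PT8H

# attach an AWS managed policy to the permission set
aws sso-admin attach-managed-policy-to-permission-set \
  --instance-arn arn:aws:sso:::instance/ssoins-1234567890abcdef \
  --permission-set-arn <permission-set-arn-returned-above> \
  --managed-policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# assign the permission set to a group in a specific AWS account;
# Identity Center then generates the corresponding IAM role in that account
aws sso-admin create-account-assignment \
  --instance-arn arn:aws:sso:::instance/ssoins-1234567890abcdef \
  --target-id 111122223333 \
  --target-type AWS_ACCOUNT \
  --permission-set-arn <permission-set-arn-returned-above> \
  --principal-type GROUP \
  --principal-id <group-id>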
One of the most important steps, though often neglected, is remembering to remove permissions once they aren’t needed anymore. This helps enforce the zero standing privileges principle, which states that no one should have standing permissions.
AWS IAM Users are a crucial aspect of managing access and permissions within the AWS ecosystem. However, relying on long-term credentials can pose tons of risks. Utilizing AWS Organizations, AWS Identity Center, and identity federation can greatly improve the management of users and resources across multiple accounts.
By leveraging these tools and pairing them with permission management applications such as Apono, you can enhance security, streamline administration, and maintain compliance within your AWS infrastructure.
Apono integrates with AWS natively, which allows you to manage access to your S3 buckets, IAM roles and groups, EC2, EKS clusters, RDS instances and many more.
Some of the benefits of integrating AWS with Apono include the following:
To avoid the tedious task of going into the AWS Identity Center admin console every time you need to grant or revoke access, it’s important to use a tool such as Apono. With Apono, users can request and reviewers can grant permissions—without leaving Slack.
You know the frustration when you check your bank balance, and there’s another $40 charge for the gym membership you forgot to cancel. Or, more likely, you didn’t cancel it ‘just in case’ you wanted to work up a sweat sometime.
Always-on privileged access (otherwise called ‘standing privileges’) manifests similarly.
77% of organizations grant unrestricted access to employees who don’t need it, but an always-on approach doesn’t necessarily help them do their jobs. Instead, it provides opportunities for security breaches that could easily fly under the radar.
In 2022 alone, 55% of organizations suffered a cyber attack where hackers phished privileged credentials, which Verizon flagged as a critical attack vector.
This article will review why just-in-time (JIT) permission management provides the security and speed organizations need to control access. We’ll also look at the types of JIT management, what automated JIT is, and the best practices for enabling it in your business.
Just-in-time (JIT) permission management, also known as just-in-time access, is a cybersecurity practice that follows the principle of least privilege to grant users access to assets only when they need it, and only for a limited timeframe. When time’s up, users lose access to resources such as applications and systems.
Using the JIT methodology to limit the window of time a user has access rights also limits attackers’ chances to infiltrate your cloud security perimeter.
With the increasing number of applications, services, users, and resources in the cloud, 45% of breaches in 2022 were cloud-based, making just-in-time (JIT) permission management a must-have. While traditional PAM processes (e.g., session management) succeed as a network-based access solution for on-premises environments, JIT is ideal for controlling access across cloud resources.
Assigning JIT permissions manually is like playing a game of whack-a-mole – requests pop up all the time across your organization, and you have to respond at lightning speed to avoid disgruntled colleagues.
59% of organizations fail to deploy zero trust due to resource constraints, so can you realistically dedicate time and personnel to granting and revoking access all day?
In contrast, automated JIT platforms help relieve friction caused by manual permission management by validating, monitoring, and revoking access without human intervention. Automated JIT platforms have features like auto-expiring permissions and reporting capabilities, enabling users to self-serve permission requests without compromising your organization’s security posture. Putting permission management in the hands of an automated JIT platform prevents human error to minimize the attack surface, eliminates bottlenecks, and ultimately helps maintain productivity.
As well as taking a weight off your IT and security teams’ shoulders, automated JIT has many other benefits.
According to IBM’s Cost of a Data Breach report, compromised or stolen credentials were the most common attack vector in 2022. Automated JIT drastically reduces the risk of privilege abuse and breached identities by eliminating the need for standing privileges.
Say goodbye to manual review cycles, wait times, and human error with an automated JIT approval workflow. You can grant access at scale to suit each task at hand, something that, without automated JIT, would put a massive dent in your operational efficiency.
With automated JIT, you can satisfy compliance and customer requirements like SOC2 by enforcing zero trust, least privilege access, and auditing all privileged access activities. Automated JIT platforms can include auditing and reporting features to help you gain visibility over all sessions and privileges.
Here are seven best practices you can follow when enabling and implementing JIT access.
First, it’s time to take stock. Begin by identifying the accounts and assets with the most privileges that pose the highest risk, usually those belonging to administrators. You can implement JIT access control to these accounts first, then work your way down the chain.
You can use role-based access control (RBAC) and attribute-based access control (ABAC) as supplementary solutions to define granular policies and circumstances for elevated access. RBAC and ABAC can help you categorize accounts and differentiate the rights they need, then create a control policy that users must meet to receive access.
As well as defining policies for justification-based access, you can create criteria for users that request temporary access, such as which accounts are valid and the duration of access. You can also implement time-based controls, for example, granting access to specific resources only during pre-defined days and times.
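As one concrete illustration of a time-based control, here is a sketch using Google Cloud IAM Conditions to create a role binding that stops applying after an expiry time. The project, user, role, and timestamp are placeholder assumptions, not prescriptions.

# grant a role that automatically stops applying after the expiry time (values are placeholders)
gcloud projects add-iam-policy-binding my-project \
  --member="user:jane@example.com" \
  --role="roles/compute.viewer" \
  --condition='expression=request.time < timestamp("2025-06-30T18:00:00Z"),title=temporary-access'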
An automated access management solution provides visibility over your operations by logging all access activities and triggering alerts in response to dodgy behavior. You can also record and log JIT privileged access. A (digital) paper trail is essential for auditing, governance, and compliance with regulations like SOC2 and PCI-DSS.
You’ll need to delegate responsibilities to employees and decide who will review permission requests. Training employees on how and when to grant or revoke access is essential to minimize incident risk, especially in moments like ‘break glass’ and ‘on-call’. This is where automated JIT comes in handy. It helps you configure ‘break glass’ and ‘on-call’ access flows to resolve incidents and remove bottlenecks like waking DevOps staff.
You should manually rotate credentials regularly to invalidate them, so hackers cannot use a password even if they get their hands on it. You can do this in a centralized vault that, of course, requires the highest security clearance possible.
The best way to simplify cloud access management is to use a solution like Apono that enforces an automated JIT approach. You can minimize friction, remove over-privileges, and prevent permissions from slipping through the cracks by using Apono to suggest automated JIT access. Standing privileges will never put your organization at risk again.
40% of IT and security professionals say that cloud security is their top priority in 2023. 38% point specifically to identity and access management, and 25% say zero trust, signaling the final nail in the coffin for standing privileges. JIT access provides an effective solution for organizations of all sizes to improve their security posture and remain continuously compliant without impeding productivity.
With Apono’s cloud-native permission management platform, you can automate permission granting for your entire stack based on organizational context and approval workflows. So you don’t need to spend time manually provisioning access. Apono integrates seamlessly with your cloud environments for a smooth user experience and streamlines compliance and customer requirements.
When you follow the principle of least privilege, you grant users just enough access so that they can carry out everyday activities, but can do nothing more. Following this principle helps you reduce risk. However, it can create friction for users when they occasionally need to perform a privileged action—such as dealing with an unexpected incident.
The problem is that most companies are unaware of the solutions available to them, so much so that they spend engineering time slapping together homegrown solutions for automating access requests to GCP. In this article, we’ll discuss the three ways you can build JIT access to GCP.
As an admin on a large project, it’s common to be inundated with requests from users asking for access to a project. This process is filled with inefficiencies – the admin doesn’t really know how to handle these requests, so they forward the request to the project lead, asking for their approval. Once the project lead approves, the admin has to manually add the user to the right group or project role, granting them access to the project.
For companies who want to benefit from JIT access to Google Projects, there are three options available. They are the following:
At Apono, we’ve spoken with many companies who, for lack of a better option, built their own internal solutions for requesting access. One such company, Mednition, recently published an article about it. You can find it in full here.
The company had a few goals:
Here’s the solution the company created:
“We decided to create a Slack bot that runs within GCP Cloud Run and logs the audit trail to GCP Cloud Logging. We will leverage Google Groups for provisioning access and Cloud Identity as the mechanism for managing temporary membership. Cloud Identity will add and remove the user from the group for us, so we don’t have to manage any state (which is amazing to avoid sync issues and edge cases). This is particularly interesting because now we can provide temporary access to third party applications if they can map access to Google Groups (outside of our use case but maybe in the future).”
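The group-based part of that approach can be approximated from the command line. This is only a sketch under the assumption that project access is mapped to a Google Group; the group and member addresses are placeholders, and the quoted setup relies on the Cloud Identity API’s membership expiry to remove members automatically rather than a manual delete.

# add a user to the access group when a request is approved (addresses are placeholders)
gcloud identity groups memberships add \
  --group-email="prod-access@example.com" \
  --member-email="jane@example.com"

# remove the user when the approved window ends
gcloud identity groups memberships delete \
  --group-email="prod-access@example.com" \
  --member-email="jane@example.com"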
Google’s Just-In-Time Access is an open source application that lets you implement just-in-time privileged access to Google Cloud resources. The application lets administrators, users, and auditors do the following tasks:
They can then activate one or more roles and provide a justification for getting access.
After a user has activated a role, Just-In-Time Access grants the user temporary access to the project.
To protect the application against unauthorized access, the Just-In-Time Access application can be accessed only over Identity-Aware Proxy (IAP). Using IAP, an administrator can control which users should be allowed to access Just-In-Time Access, and which additional conditions those users must satisfy in order to get access.
Solutions such as Apono provide plug-and-play authorization workflows so that companies don’t need to start from scratch. Apono serves as the intermediary that connects identities with entitlements, enabling access on a just-in-time, temporary basis. Apono’s privilege authorization capability provides a reliable and streamlined approach to permission management and mitigates the consequences of a GCP permissions-related breach, without compromising user experience and productivity.
The image below features an access flow that allows developers to get temporary read-only access to production when needed.
Remember that managing access control effectively is a critical aspect of maintaining the security and integrity of your GCP resources. Regularly review and audit your roles and permissions to ensure they align with your evolving requirements.
Here’s a brief overview of everything you need to know.
The Google Cloud Platform (GCP) provides a robust system for managing access and permissions through roles. Roles in GCP allow you to control what actions users and services can perform within your projects and resources. In this beginner’s guide, I’ll provide an overview of GCP roles and their usage.
1. Understanding GCP IAM: Google Cloud Identity and Access Management (IAM) is the central service that controls access to GCP resources. IAM manages permissions and roles across projects, allowing you to grant and revoke access as needed.
2. Predefined Roles: GCP offers a set of predefined roles with specific permissions. These roles are designed to cover common use cases and provide a level of granular access control. Some of the commonly used predefined roles include:
– Owner: Has full control over the project, including managing roles and billing.
– Editor: Can view and modify project resources but cannot manage roles or billing.
– Viewer: Can view project resources but cannot make any modifications.
– Billing Account Administrator: Has access to billing information and can manage billing accounts.
– Compute Instance Admin: Can manage compute instances but not other resources.
– Storage Object Viewer: Can view objects within Cloud Storage buckets.
These predefined roles are useful starting points for managing access, but they may not always provide the precise level of control needed for your specific requirements.
3. Custom Roles: GCP allows you to create custom roles tailored to your specific needs. Custom roles offer fine-grained control over permissions by selecting individual actions or setting broad permissions within a particular service. You can define custom roles at the project or organization level and assign them to users or groups as required.
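For example, a custom role can be created with gcloud. The role ID, project, and permissions below are placeholders chosen purely for illustration.

# create a custom role containing only the permissions you need (values are placeholders)
gcloud iam roles create instanceViewer \
  --project=my-project \
  --title="Instance Viewer" \
  --description="Read-only access to Compute Engine instances" \
  --permissions=compute.instances.get,compute.instances.list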
4. Role Hierarchy: GCP’s basic roles form a hierarchy in which higher-level roles include the permissions of lower-level roles. For example, the `roles/owner` role includes all permissions of the `roles/editor` role, which, in turn, includes the permissions of the `roles/viewer` role. Understanding this hierarchy is crucial for managing roles effectively and avoiding unnecessary duplication.
5. Service Accounts: Service accounts are special accounts used by applications, virtual machines, and other services to interact with GCP resources. Similar to user accounts, service accounts can be assigned roles to determine their permissions. It’s important to grant the minimum required permissions to service accounts to reduce the risk of unauthorized access.
6. Granting Roles: Roles can be assigned at various levels, including the project, folder, or organization level. You can grant roles to individual users, groups, or service accounts. It’s recommended to follow the principle of least privilege, granting only the necessary permissions for users to perform their tasks.
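A typical grant at the project level looks like the following sketch; the project, group, and role are placeholders.

# bind a predefined role to a group on a single project (values are placeholders)
gcloud projects add-iam-policy-binding my-project \
  --member="group:devs@example.com" \
  --role="roles/compute.viewer"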
7. Testing Access: GCP provides a tool called the IAM & Admin Policy Simulator, which allows you to test the effectiveness of your roles and permissions without making actual changes to access control settings. This tool helps ensure that users have the right level of access without exposing sensitive resources.
Just-in-time database access is about managing access to specific databases. It has a lot of moving parts and may seem complicated, but there are things that can be done that make it much easier.
In this blog, we’ll explore roles and how access management to databases works today, why direct access to databases is needed, what an agile approach to access management is, and the ways that just-in-time database access makes the whole process much easier and safer.
In the database world, a role is a group of privileges that can be assigned to one or more users, and a user can have one or more roles assigned to them.
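In MySQL 8.0 and later, for instance, roles can be created and assigned directly with SQL. The sketch below uses placeholder role, user, database, and password values.

# run as an administrative user; role, user, database, and password values are placeholders
mysql -u root -p <<'SQL'
CREATE ROLE 'analyst';
GRANT SELECT ON exampledb.* TO 'analyst';

CREATE USER 'jane'@'localhost' IDENTIFIED BY 'Str0ng-Example-Passw0rd!';
GRANT 'analyst' TO 'jane'@'localhost';
SET DEFAULT ROLE 'analyst' TO 'jane'@'localhost';
SQL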
There are two ways to provision access today: manage identities directly inside the database, or connect the database to existing identity sources like Active Directory or Okta.
Problems with the above
Companies strive to limit direct access to databases for many reasons: security, a fail-safe against potential human error, and a number of other benefits. No matter the reason, this type of least-privilege access policy has led to a rise in popularity for BI tools such as BigQuery and Elasticsearch. These tools allow analysts or devs the opportunity to access the data without directly accessing the production environment.
Although it’s prudent to follow the zero-trust approach, there are still times when it’s necessary for engineers to enter the production environment immediately, for example, during incidents. Without immediate access to fix things in production, problems can persist and the MTTR rises, resulting in lost time, lost resources and lost money.
A few examples are the following:
Just-in-Time Database Access
Provisioning Just-in-Time (JIT) database access enables users to obtain temporary, on-demand privileged access to resources. This approach falls under identity access management or privileged access management and is particularly designed to address scenarios where certain users may not regularly require access to specific applications or services. However, they can gain timely access to these resources when necessary, but only for a limited duration.
Compared to the concept of standing privileges, which grants users constant, broad access to resources, just-in-time database access provisioning takes a different approach. It ensures that all access is strictly temporary and granted only to the required resources, what’s called “just enough” access.
Furthermore, this system typically limits access based on roles, aligning with the principle of least privilege (POLP), a requirement in many policies and regulations. The core idea behind just-in-time access is to eliminate permanent authorizations and unending access to critical infrastructure. This idea is gaining momentum in the security landscape, and it is also very helpful in maintaining a more stable, yet agile, production environment.
By default, Just-in-Time database access makes all access temporary, consistently verifying the validity of users, connections, roles, and privilege levels during every connection establishment. This approach effectively removes implicit trust from the equation, aligning with the fundamental philosophy of the Zero Trust framework – “never trust, always verify.”
Overall, just-in-time access is a powerful approach to access management that not only increases security but also supports incident response efforts and regulatory compliance.
About Apono
Apono helps you grant just-in-time and just-enough access to your mission-critical and highly sensitive databases while eliminating the need for long-term privileged access.
Permission management for databases is a sore spot in many DevOps pipelines.
It requires a careful balancing act between access control and productivity. On one hand, privileged access exposes the organization to risks. On the other hand, if we restrict it too much, we end up with bottlenecks resulting in a lack of productivity. It’s a lose-lose situation.
Companies strive to limit direct access to databases for many reasons: security, a fail-safe against potential human error, and a number of other benefits. No matter the reason, this type of least-privilege access policy has led to a rise in popularity for BI tools such as BigQuery and Elasticsearch. These tools allow devs the opportunity to access the data without entering the production environment.
Although it’s prudent to follow the zero-trust approach, there are still times when it’s necessary for engineers to enter the production environment immediately, for example, during incidents. Without immediate access to fix things in production, problems can persist and the MTTR rises, resulting in lost time, lost resources and lost money.
A few examples are the following:
It’s clear there needs to be a solution to the chaos, and there are two approaches to achieve this. They are the following:
Just-in-Time access grants users just-in-time and just-enough access to mission-critical and sensitive databases. It eliminates the need for long-term privileged access for users and reduces the attack surface.
It is evident that when it comes to permission management for databases, we must establish order amidst chaos, and luckily there are ways to create a win-win permission management situation when it comes to provisioning database access.
At Apono, we constantly hear from customers how difficult it is to set up granular permissions with F5, so we decided to dive in and see what’s so frustrating. We found a total of 6 issues. Check them out below.
F5 is a company specializing in application security, multi-cloud management, online fraud prevention, application delivery networking (ADN), application availability & performance, network security, and access and authorization.
Luckily, there are ways to overcome these obstacles. Read on below to find out how.
2. Automated Access Workflows. Setting up automatic workflows saves manual labor and time. They also allow for break-glass scenarios when needed, which decreases MTTR and downtime.
3. Self-Serve Permissions. Self-service access workflows allow you to create a process for your users to request access to datasets quickly and easily.
4. Resource-based access policies. Apono allows you to grant permissions to granular resources, down to the level of specific namespaces.
“Apono allows us to generate temporary permissions upon request on a very granular set of restrictions, delivering huge value to the business by reducing that Excel phase and optimizing the day-to-day work of multiple teams, including the R&D operations and security teams.”
As you can see, creating on-demand, granular-level permission access policies is impossible to do with just F5 alone. In order to benefit from JIT with F5, it’s imperative that you use a tool that can handle dynamic context. In other words, F5 permission management doesn’t have to suck anymore.
MySQL is an open-source relational database and part of the popular LAMP stack (Linux, Apache, MySQL, PHP). A MySQL installation can be managed through the root user or through specific user accounts.
Managing user credentials in MySQL can be a time-consuming task, particularly when dealing with numerous MySQL instances spread across multiple servers.
In this article, we’ll be reviewing how to do the following:
Once you have MySQL installed on the server(s) that will host your MySQL environment, you need to create a database and additional user accounts. In order to run the following commands, log into the MySQL instance with the MySQL root account.
Creating a MySQL database involves a few simple steps. Here’s a step-by-step guide to creating a new MySQL database:
1. Install MySQL:
If you don’t have MySQL installed on your system, you need to install it first. You can download the MySQL Community Server from the official MySQL website: https://dev.mysql.com/downloads/
2. Start the MySQL Server:
Once you have MySQL installed, start the MySQL server. The process for starting the server varies depending on your operating system. On most systems, you can start the server using a command or by starting the MySQL service.
3. Connect to MySQL:
After the server is running, you need to connect to it using the MySQL command-line client or a graphical tool like phpMyAdmin.
– For the command-line client, open a terminal or command prompt and type:
mysql -u root -p
You will be prompted to enter the MySQL root password.
– For a graphical tool like phpMyAdmin, open a web browser and navigate to the phpMyAdmin URL. You can log in using your MySQL root credentials.
4. Create a New Database:
Now that you are connected to MySQL, you can create a new database using SQL commands. In the MySQL command-line client or phpMyAdmin, use the following SQL statement to create a new database (replace “Apono_database” with the desired name of your database):
CREATE DATABASE Apono_database;
5. Verify the Database Creation:
To ensure that the database was created successfully, you can check the list of databases. In the MySQL command-line client, use the following command:
SHOW DATABASES;
6. Use the New Database (Optional):
If you want to work with the newly created database, you need to switch to it using the following command in the MySQL command-line client:
USE Apono_database;
That’s it! You have now successfully created a MySQL database. You can start creating tables and inserting data into it to build your application or manage your data. Remember to handle database credentials and access permissions with care to maintain security.
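Before granting permissions, the target account has to exist. As a brief sketch (the user name, host, and password below are placeholders), you can create, and later remove, a user like this:

# create the account that will receive grants (name, host, and password are placeholders)
mysql -u root -p -e "CREATE USER 'exampleuser'@'localhost' IDENTIFIED BY 'Str0ng-Example-Passw0rd!';"

# when the account is no longer needed, remove it
mysql -u root -p -e "DROP USER 'exampleuser'@'localhost';"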
To grant permissions in MySQL, you’ll need to have administrative privileges or the GRANT OPTION privilege on the database you want to modify. Here are the steps to grant permissions to a user in MySQL:
1. Connect to MySQL: Open a terminal or command prompt and connect to MySQL using a user account with administrative privileges. For example:
mysql -u root -p
You will be prompted to enter the password for the ‘root’ user or the administrative user you provided.
2. Select the database: If you want to grant permissions for a specific database, first select it using the following command:
USE Apono_database;
3. Grant the permissions: Now, you can grant various privileges to the user using the `GRANT` statement. The basic syntax is as follows:
GRANT privilege_type ON database_name.table_name TO 'user'@'host';
Replace `privilege_type` with the specific privileges you want to grant. Here are some common privileges:
– `SELECT`: Allows the user to read (SELECT) data from tables.
– `INSERT`: Allows the user to insert new rows into tables.
– `UPDATE`: Allows the user to modify existing rows in tables.
– `DELETE`: Allows the user to remove rows from tables.
– `CREATE`: Allows the user to create new tables or databases.
– `DROP`: Allows the user to delete tables or databases.
– `ALL PRIVILEGES`: Grants all privileges on the specified objects.
Replace `database_name.table_name` with the specific database and table (or `*` for all tables) where you want to grant the privileges.
Replace `’user’@’host’` with the username and the host from which the user will connect. For example, `’john’@’localhost’` refers to the user ‘john’ connecting from the same machine as the MySQL server.
For example, to grant SELECT, INSERT, UPDATE, and DELETE privileges on all tables of a database called ‘exampledb’ to a user ‘exampleuser’ connecting from ‘localhost’, you would use the following command:
GRANT SELECT, INSERT, UPDATE, DELETE ON exampledb.* TO 'exampleuser'@'localhost';
4. Apply the changes: After executing the `GRANT` statement, you need to apply the changes for them to take effect:
FLUSH PRIVILEGES;
5. Exit MySQL: When you’re done granting permissions, exit the MySQL command line interface by typing:
EXIT;
The user ‘exampleuser’ should now have the specified privileges on the ‘exampledb’ database or the specified tables within it. Make sure to grant the appropriate permissions based on your application’s requirements to ensure security and access control.
To revoke permissions in MySQL, you can use the `REVOKE` statement. This allows you to remove specific privileges from a user or role. Here’s how you can do it:
1. Connect to MySQL: Open a terminal or command prompt and connect to MySQL using a user account with administrative privileges. For example:
mysql -u root -p
You will be prompted to enter the password for the ‘root’ user or the administrative user you provided.
2. Select the database: If you want to revoke permissions for a specific database, first select it using the following command:
USE Apono_database;
3. Revoke the permissions: Now, you can revoke specific privileges from the user using the `REVOKE` statement. The basic syntax is as follows:
REVOKE privilege_type ON database_name.table_name FROM 'user'@'host';
Replace `privilege_type` with the specific privileges you want to revoke. These should match the privileges you previously granted to the user. For example, if you previously granted SELECT, INSERT, UPDATE, and DELETE privileges, you would use the same list of privileges in the `REVOKE` statement.
Replace `database_name.table_name` with the specific database and table (or `*` for all tables) from which you want to revoke the privileges.
Replace `’user’@’host’` with the username and the host from which the user was connecting. For example, `’john’@’localhost’` refers to the user ‘john’ connecting from the same machine as the MySQL server.
For example, to revoke SELECT, INSERT, UPDATE, and DELETE privileges on all tables of a database called ‘exampledb’ from a user ‘exampleuser’ connecting from ‘localhost’, you would use the following command:
REVOKE SELECT, INSERT, UPDATE, DELETE ON exampledb.* FROM 'exampleuser'@'localhost';
4. Apply the changes: After executing the `REVOKE` statement, you need to apply the changes for them to take effect:
FLUSH PRIVILEGES;
5. Exit MySQL: When you’re done revoking permissions, exit the MySQL command line interface by typing:
EXIT;
That’s it! The user ‘exampleuser’ should no longer have the specified privileges on the ‘exampledb’ database or the specified tables within it. Make sure to carefully revoke only the permissions that are no longer necessary, to maintain proper access control and security.
You should now be able to create, modify, and delete users, and grant or revoke permissions in a MySQL database.
Remember, to improve security and limit accidental damage, it’s important to limit users to only the privileges required for their jobs.
Check out our article about Just-in-Time Access to Databases.
When enabling MongoDB authentication post-setup, it’s important to do the following things if you want to avoid downtime.
Organizations must strike a balance between enabling employees to be productive and efficient while ensuring that access to sensitive information and resources is adequately protected.
On one hand, productivity is crucial for organizations to remain competitive and achieve their goals. Employees need access to various systems, data, and tools to perform their tasks efficiently. Restricting access too much or implementing overly stringent security measures can hinder productivity and impede workflow.
On the other hand, permission security is necessary to safeguard sensitive information, prevent unauthorized access, and mitigate the risk of data breaches or other security incidents. Organizations need to implement access controls, user permissions, and authentication mechanisms to ensure that only authorized individuals can access specific resources. These security measures help protect confidential data, intellectual property, and other critical assets from unauthorized use or disclosure.
Finding the right balance between productivity and permission security involves careful consideration and risk assessment.
When using a VPN or VPC, it’s easy to think that you don’t need any authentication or ways to enable authorization. After all, it is a private network. And, when it comes to productivity, manual provisioning takes time and needs constant oversight. It’s easy to see why so many companies choose to forgo security in favor of productivity.
However, as these companies grow, they need to implement security measures and be compliant–all without interrupting productivity.
“Companies want to restrict access; they don’t want everyone to have access without a user password. Rather, companies want to make sure that people have different levels of access without hurting the R&D productivity.”
– Rom Carmel,
CEO and Founder, Apono
For the many companies that didn’t set up authorization in MongoDB at the very beginning, it’s not too late. Setting it up after transitioning to the cloud is not impossible, but it does take some know-how.
It’s not only the people who need to access MongoDB who are affected; applications are, too. In effect, none of the applications can access that MongoDB either.
Enabling authorization post-setup will break how teams work unless it’s done smartly.
It’s important to run MongoDB with transitionToAuth enabled, which allows the server to accept both authenticated and unauthenticated connections. Clients connected to MongoDB during the transition state can perform read, write, and administrative operations on any database, so it’s important to remember to disable the feature after transitioning.
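As a rough sketch of what the transition can look like for a replica set member (the key file path and replica set name here are placeholder assumptions), the server is first started with transitionToAuth and later restarted without it once every client authenticates:

# transition state: authenticated and unauthenticated clients are both accepted
# (key file path and replica set name are placeholders)
mongod --keyFile /etc/mongodb/keyfile --replSet rs0 --transitionToAuth

# once all users and applications connect with credentials,
# restart without --transitionToAuth so authentication is enforced
mongod --keyFile /etc/mongodb/keyfile --replSet rs0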
This transition puts the company in an in-between state that allows access in two ways: with user-password connection strings and without them. So, essentially, you’re not yet restricting any access, because the database can still be reached without a user password. However, you are now starting to log connections and can therefore see who hasn’t moved to using a password yet.
“You don’t want to just disable it before you’ve seen that all the people and applications who use and access that MongoDB have shifted to moving to working the new way.”
Rom Carmel
Without Apono, companies need to create their own users and their own policies for them. But with Apono, they don’t need to do that. They can ask for what they need, and it’s automatically granted. How? When someone requests permissions for a user, Apono goes inside the MongoDB instance, creates a policy that fits those needs, and gives the requestor a user. That user can then be used to connect once authentication is turned on.
Our team had an amazing time at Kubecon Amsterdam, connecting with DevOps and developers from around the world and showcasing our permissions management automation platform—Apono.
We were thrilled to see the excitement and interest in our solution, as attendees recognized the need for better permission management in their organizations, from a security, time-saving, and compliance perspective.
In addition to showcasing Apono at the conference, we also held several technical sessions to help attendees learn how to automate permission management. These sessions covered a range of topics, from gaining access visibility in K8s clusters to creating incident response access flows for on-call groups.
A few takeaways from our team at Kubecon:
“I realized that even a developer can sell when his product brings real value.”
Dima – Software Team Lead @ Apono
“I learned that the best way of getting someone’s attention is with a nerf-gun, hacky sacks and old-school arcade games – I’m definitely in the right industry.”
Roey – Head of Marketing @ Apono
“I discovered that everyone needs Apono. It does not matter if it is a big company or a small one; as long as you need to keep your customer data safe, you will need JIT permissions to all your cloud assets, DBs, K8s, and R&D applications.”
Tamir – Senior Director of Technical Services @ Apono
“I learned that you can put thousands of developers in one room and no tragedy will happen. Also, there is no such thing as too many socks!”
Ofir – CTO & Co-founder @ Apono
“Kubernetes is everywhere and permissions in K8s is a tedious thing to manage, especially when trying to do it “right” and granular. Even just understanding who has what permissions in a cluster is not as straightforward as one might think”
Rom (CEO & Co-founder @ Apono)
Overall, Kubecon Amsterdam was a fantastic opportunity to meet like-minded individuals who share our passion for innovation for k8s in the cloud-native space.
We look forward to continuing to develop solutions that help organizations better manage their cloud environments and improve their security posture.