While most engineering tooling at DoorDash is focused on making safe, incremental improvements to existing systems, in part by testing in production (learn more about our end-to-end testing strategy), this is not always the best approach when launching an entirely new business line. Building from scratch often requires faster prototyping and customer validation than incremental improvements to an existing system allow. In the New Verticals organization at DoorDash, we are launching and growing new categories such as alcohol and other regulated goods, health, retail, convenience, and grocery. Often we're going from zero to one. We needed to move quite fast during one recent expansion of our business, which required a local development experience that could keep up. In this article, we will provide some context and then explain how we were able to speed up our development by enabling easy local development with PostgreSQL and LocalStack.
Breaking from the typical DoorDash development environment
Ideally, the infrastructure and requirements are already in place when we develop a backend microservice, which is typically the case for new applications at DoorDash. Concrete requirements and existing infrastructure make it simpler for development environments to slot in easily and safely, removing some of the need for rapid iteration because the application's design can be front-loaded based on the requirements. This existing stability helps avoid unexpected behavior within the application, which keeps deployments safe.
However, this entirely new microservice could not be built on any existing infrastructure for compliance reasons. Instead, we had to develop our application in parallel with infrastructure planning and spin-up. As backend developers, we needed to stay unblocked while the infrastructure - in this case, AWS resources - was being created. Our backend had to be developer-friendly, allowing the team to iterate rapidly on evolving requirements and work independently on separate tasks without testing interrupting anyone's work. Given the amorphous nature of the task, the typical DoorDash local development environment approach was not suitable.
Creating a new approach to local development
To kick off the creation of a local dev environment, we first had to take stock of our desired infrastructure as well as our available tooling and resources before charting out how to set up the environment quickly and efficiently. We knew we'd be deploying a Docker container to Fargate as well as using an Amazon Aurora PostgreSQL database and Terraform to model our infrastructure as code. It was fair to assume that we would use other AWS services, particularly SQS and AWS Secrets Manager.
One local development approach would have been to mock, or create dummy versions of, our cloud resources. Local mocks may work well under some circumstances, but it's difficult to be fully confident in the final end-to-end experience of an application because the mocks may be incorrect, lack important features, or exhibit unanticipated behaviors.
Given these considerations, we devised a strategy for architecting our local development environment that balances development speed, ease of use, and similarity to production. We broke the strategy into four steps:
- Use Docker Compose for our Docker application and all of its required resources.
- Set up a containerized PostgreSQL database running locally.
- Use LocalStack to run AWS resources locally.
- Use Terraform to create consistent AWS resources in LocalStack.
Understanding the trade-offs
Our local development approach comes with a number of pros and cons:
Pros:
- It's quick and easy for anyone new to the project to get the local development environment up and running.
- Local development environments are consistent across machines and across environment startups.
- There's no risk of accidentally touching production data or systems.
- It's easy to evolve the desired infrastructure and add new application capabilities.
- No infrastructure is required in the cloud.
- No long waits at startup. First runs need a little extra time to download the Docker images, but every subsequent startup should be fast.
- Everything is tracked in code, with the application and its full environment mapped out via Docker Compose and Terraform.
- It's agnostic to backend microservice framework and language.
Cons:
- Because it's not actually running in production, the final result may not be an entirely accurate reflection of how the microservice ultimately will perform in production.
- Because there's no interaction with production resources or data, it can be difficult to create the dummy data needed to accurately reflect every test scenario.
- Adding additional in-house microservices that have their own dependencies may not be straightforward and can become difficult to manage.
- As the application and infrastructure grow, running everything locally may become a resource drain on an engineer's machine.
- Some LocalStack AWS services don't have 1:1 feature parity with AWS. Additionally, some require a paid subscription.
The bottom line is that this local development approach lets new developers get started faster, keeps the environment consistent, avoids production mishaps, is easy to iterate on, and can be tracked via Git. On the other hand, generating dummy data can be difficult and as the application's microservice dependency graph grows, individual local machines may be hard-pressed to run everything locally.
Below, we detail each piece of our approach. If you're working with your own application, swap in your implementation details for ours, including things like Node.js, TypeScript, AWS services, and environment variable names.
Clone the example project
Let's get started by checking out our example project from GitHub. This project has been set up according to the instructions detailed in the rest of this post.
Example project: doordash-oss/local-dev-env-blog-example
In this example, our backend application has been built using TypeScript, Node.js, Express, and TypeORM. You're not required to use any of these technologies for your own application, of course, and we won't focus on any specifics related to them.
This example project is based on an application that exposes two REST endpoints - one for creating a note and another for retrieving one.
POST /notes
GET /notes/:noteid
When a note is posted, we also send a message to an SQS queue. Currently, nothing is done with the messages in the queue, but in the future we could wire up a consumer to process notes asynchronously.
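The core of the POST handler can be sketched as follows. This is a simplified stand-in for the example project's actual code; the `Note` shape mirrors the notes table we seed later in this post, and the TypeORM persistence and SQS publish steps are only noted in comments:

```typescript
import { randomUUID } from 'node:crypto';

// Mirrors the notes table created by the seed SQL later in this post.
interface Note {
  id: string;
  contents: string;
  createdAt: Date;
  updatedAt: Date;
}

// Validates input and builds the row that POST /notes would then persist
// via TypeORM and announce on the SQS queue.
function buildNote(contents: string): Note {
  if (contents.length === 0 || contents.length > 450) {
    throw new Error('contents must be between 1 and 450 characters');
  }
  const now = new Date();
  return { id: randomUUID(), contents, createdAt: now, updatedAt: now };
}
```

The 450-character limit matches the `varchar(450)` column in the seed SQL; everything else about the handler (routing, error mapping) is left to Express.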
Install the prerequisites to get the example project up and running. Note that these instructions can also be found in the project's README.
- Node version >= 16.13 but <17 installed.
- https://nodejs.org/en/download/
- Docker Desktop installed and running.
- https://www.docker.com/products/docker-desktop/
- postgresql installed.
- `brew install postgresql`
- awslocal installed.
- https://docs.localstack.cloud/integrations/aws-cli/#localstack-aws-cli-awslocal
- Run npm install
- `npm install`
Set up Docker Compose with your application
Docker Compose is a tool for defining and running multi-container Docker environments. In this case, we'll run our application in one container and use a few others to simulate our production environment as accurately as possible.
1. Start by configuring the application to run via Docker Compose. First, create a Dockerfile, which describes how the container should be built.
dockerfiles/Dockerfile-api-dev
FROM public.ecr.aws/docker/library/node:lts-slim
# Create app directory
WORKDIR /home/node/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY tsconfig.json ./
COPY src ./src
RUN npm install --frozen-lockfile
# We have to install these dev dependencies as regular dependencies to get hot swapping to work
RUN npm install nodemon ts-node @types/pg
# Bundle app source
COPY . .
This Dockerfile contains steps specific to Node.js. Unless you're also using Node.js and TypeORM, yours will look different. For more information about the Dockerfile specification, check out the Docker documentation.
2. Next, create a docker-compose.yml file and define the application container.
docker-compose.yml
version: '3.8'
services:
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
command: ['npm', 'run', 'dev']
Here we have defined a service called api that will spin up a container named example-api that uses the Dockerfile we previously defined as the image. It exposes port 8080, which is the port our Express server starts on, and mounts the ./src directory to the directory /home/node/app/src. We're also setting the NODE_ENV environment variable to development and starting the application with the command npm run dev. You can see what npm run dev does specifically by checking out that script in package.json here. In this case, we're using a package called nodemon which will auto-restart our backend Node.js express application whenever we make a change to any TypeScript file in our src directory, a process that is called hotswapping. This isn't necessary for your application, but it definitely speeds up the development process.
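For reference, a nodemon-based dev script typically looks something like the following sketch in package.json (the example project's actual script may differ slightly):

```json
{
  "scripts": {
    "dev": "nodemon --watch src --ext ts --exec ts-node src/index.ts"
  }
}
```

Any change to a .ts file under src/ restarts the server inside the running container, since docker-compose mounts ./src into it.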
Setting up a local database
Most backend microservices wouldn't be complete without a database layer for persisting data. This next section will walk you through adding a PostgreSQL database locally. While we use PostgreSQL here, many other databases have Docker images available, such as CockroachDB or MySQL.
1. First, we'll set up a PostgreSQL database to be run and connected to locally via Docker Compose.
Add a new PostgreSQL service to the docker-compose.yml file.
docker-compose.yml
postgres:
container_name: 'postgres'
image: public.ecr.aws/docker/library/postgres:14.3-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DB=example
ports:
- '5432:5432'
volumes:
- ./db:/var/lib/postgresql/data
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U test -d example']
interval: 5s
timeout: 5s
retries: 5
Here we have defined a service and container called postgres. It uses the public PostgreSQL 14.3 image because we don't need any customization. We've specified a few environment variables, namely the user and password needed to connect to the database and the name of the database. We're exposing the default PostgreSQL port 5432 locally and using a local folder named db for the underlying database data. We've also defined a health check that checks that the example database is up and accessible.
Now we can connect our application to the database by adding environment variables that match our configured database credentials.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
postgres:
condition: service_healthy
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
command: ['npm', 'run', 'dev']
One interesting thing to note about connections between containers in a Docker Compose environment is that the hostname you use to connect to another container is the container's name. In this case, because we want to connect to the postgres container, we set the host environment variable to be postgres. We've also specified a depends_on section which tells the example-api container to wait to start up until the health check for our postgres container returns successfully. This way our application won't try to connect to the database before it is up and running.
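Inside the application, those environment variables feed the database connection. With TypeORM, that amounts to building an options object along these lines; this is a sketch, and the helper name is ours rather than the example project's:

```typescript
// Builds TypeORM-style Postgres connection options from the environment
// variables set in docker-compose.yml. In the real application, this object
// would be passed to `new DataSource(...)` from the typeorm package.
function postgresOptionsFromEnv(env: Record<string, string | undefined>) {
  return {
    type: 'postgres' as const,
    host: env.POSTGRES_HOST, // 'postgres' inside Docker Compose, 'localhost' outside
    port: Number(env.POSTGRES_PORT ?? 5432),
    username: env.POSTGRES_USER,
    password: env.POSTGRES_PASSWORD,
    database: env.POSTGRES_DATABASE_NAME,
  };
}
```

With the compose file above, calling this on process.env resolves to host 'postgres', port 5432, user 'test', password 'password', and database 'example'.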
2. Now we'll seed the database with some data whenever it starts up.
If you're testing your application in any way, it's probably useful to have a local database that always has some data. To ensure a consistent local development experience across docker-compose runs and across different developers, we can add a Docker container which runs arbitrary SQL when docker-compose starts.
To do this, we start by defining a bash script and a SQL file, as shown below.
scripts/postgres-seed.sql
-- Add any commands you want to run on DB startup here.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS notes (
id UUID NOT NULL DEFAULT uuid_generate_v4(),
contents varchar(450) NOT NULL,
created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now(),
updated_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now()
);
-- Since data is kept between container restarts, you probably want to delete old inserted data so that you have a known state every time the database starts up
DELETE FROM notes;
INSERT INTO notes (id, contents) VALUES ('6a71ff7e-577e-4991-bc70-4745b7fbbb78', 'Look at this lovely note!');
This is just a simple SQL file that creates a database table called "notes" and inserts a note into it. Note the use of IF NOT EXISTS and the DELETE, which ensure that this script will always execute successfully, whether it's run after the database is first created or multiple times after.
scripts/local-postgres-init.sh
#!/bin/bash
export PGPASSWORD=password; psql -U test -h postgres -d example -f /scripts/postgres-seed.sql
This bash script runs our postgres-seed.sql file against our database.
Next, define the Docker service and container in docker-compose to execute the script and SQL.
docker-compose.yml
postgres-init:
container_name: postgres-init
image: public.ecr.aws/docker/library/postgres:14.3-alpine
volumes:
- './scripts:/scripts'
entrypoint: '/bin/bash'
command: ['/scripts/local-postgres-init.sh']
depends_on:
postgres:
condition: service_healthy
This spins up a container named postgres-init that runs the bash script above. Like our application, it waits to start until our database container is up and running.
Speaking of our application, let's also make sure that it waits for our database to be seeded.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
postgres:
condition: service_healthy
postgres-init:
condition: service_completed_successfully
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
command: ['npm', 'run', 'dev']
Setting up LocalStack
If you're taking full advantage of AWS, your local development environment likely wouldn't be complete without access to the AWS services you rely on - or at least mocks of them. LocalStack lets you run many of your AWS resources locally alongside your application, ensuring test data is always separated from the rest of your team while maintaining an application environment that's as close to prod as possible.
1. First, configure LocalStack to run with Docker Compose.
Just like our database or application, we define a LocalStack service and container in our docker-compose.yml file. The configuration we're using is based on the recommended configuration from LocalStack.
docker-compose.yml
localstack:
container_name: 'localstack'
image: localstack/localstack
ports:
- '4566:4566'
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- '${TMPDIR:-/tmp}/localstack:/var/lib/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
Here we've defined a service named localstack with a container named localstack. It uses the publicly available LocalStack image and exposes port 4566, which is the default port LocalStack runs on. Per their config suggestions, we set an environment variable that connects LocalStack to Docker and a couple of volumes, one of which is required for Docker connectivity while the other specifies where LocalStack should store its data.
2. Now that LocalStack is running alongside your application, we can create AWS resources for your application to interact with.
This can be done manually using LocalStack's CLI:
awslocal s3api create-bucket --bucket my-test-bucket
awslocal s3api list-buckets
{
"Buckets": [
{
"Name": "my-test-bucket",
"CreationDate": "2022-12-02T21:53:24.000Z"
}
],
"Owner": {
"DisplayName": "webfile",
"ID": "bcaf1ffd86f41161ca5fb16fd081034f"
}
}
For more information about commands, check out the AWS CLI v1 reference and the LocalStack docs on AWS service feature coverage. Instead of using aws, you simply use awslocal.
Let's also make sure our application doesn't try to start up without LocalStack already running.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
localstack:
condition: service_started
postgres:
condition: service_healthy
postgres-init:
condition: service_completed_successfully
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
- AWS_REGION=us-west-2
- AWS_ACCESS_KEY_ID=fake
- AWS_SECRET_ACCESS_KEY=fake
- SQS_NOTES_QUEUE_URL=http://localstack:4566/000000000000/notes-queue
command: ['npm', 'run', 'dev']
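On the application side, the only LocalStack-specific wiring is pointing the SQS client at LocalStack instead of real AWS; against LocalStack, the credentials merely need to be present, not valid. A sketch of how these variables might be consumed (the helper name is ours, not the example project's):

```typescript
// Derives AWS-SDK-style SQS client options from the environment variables
// set in docker-compose.yml. The endpoint ('http://localstack:4566') is
// just the origin of the queue URL, so no extra configuration is needed.
function sqsClientOptionsFromEnv(env: Record<string, string | undefined>) {
  return {
    region: env.AWS_REGION,
    credentials: {
      accessKeyId: env.AWS_ACCESS_KEY_ID ?? 'fake',
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY ?? 'fake',
    },
    endpoint: new URL(env.SQS_NOTES_QUEUE_URL ?? '').origin,
  };
}
```

In the real application, this object would be handed to the AWS SDK's SQS client constructor; everything else about sending a message is identical to talking to real AWS.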
Set up Terraform
While it's great to be able to create AWS resources on the fly for your application locally, you probably have some resources you want to start up every single time with your application. Terraform is a good tool to ensure a consistent and reproducible AWS infrastructure.
1. To start, define the infrastructure in Terraform.
We're going to define our infrastructure in a stock standard .tf file. The only difference is that we need to specify that the AWS endpoint we want to interact with is actually LocalStack.
Let's add a queue.
terraform/localstack.tf
provider "aws" {
region = "us-west-2"
access_key = "test"
secret_key = "test"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
endpoints {
sqs = "http://localstack:4566"
}
}
resource "aws_sqs_queue" "queue" {
name = "notes-queue"
}
Here we've set up a very basic Terraform configuration for AWS resources. All the values in the provider section should stay as-is except for the region, which is up to you. Just remember that your application will need to use the same region. You can see we set up an SQS Queue called "notes-queue" and we've made sure to set the SQS endpoint to localstack.
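Additional AWS services follow the same pattern: register the service's LocalStack endpoint in the provider's endpoints block and declare resources as usual. As a hypothetical example (not part of the example project), adding an S3 bucket would look like:

```hcl
# In the provider "aws" block, alongside sqs:
#   endpoints {
#     sqs = "http://localstack:4566"
#     s3  = "http://localstack:4566"
#   }

resource "aws_s3_bucket" "test_bucket" {
  bucket = "my-test-bucket"
}
```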
2. Continuing the theme of automation via Docker Compose, we can now use Docker to automatically apply our Terraform configuration on startup.
Let's create a new Docker-based service+container in our docker-compose.yml file with a Dockerfile that installs Terraform and the AWS CLI, and then runs Terraform to create our resources. Yes, you heard that correctly. This container is going to run Docker itself (Docker-ception!). More on that in a second.
First, we need our Dockerfile. It looks complicated, but it just follows these simple steps:
- Install the necessary prerequisites.
- Install the AWS CLI.
- Install Terraform.
- Copy our local script, which runs Terraform, into the container image.
- Have the image run our Terraform script when the container starts.
dockerfiles/Dockerfile-localstack-terraform-provision
FROM docker:20.10.10
RUN apk update && \
apk upgrade && \
apk add --no-cache bash wget unzip
# Install AWS CLI
RUN echo -e 'http://dl-cdn.alpinelinux.org/alpine/edge/main\nhttp://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/testing' > /etc/apk/repositories && \
wget "s3.amazonaws.com/aws-cli/awscli-bundle.zip" -O "awscli-bundle.zip" && \
unzip awscli-bundle.zip && \
apk add --update groff less python3 curl && \
ln -s /usr/bin/python3 /usr/bin/python && \
rm /var/cache/apk/* && \
./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
rm awscli-bundle.zip && \
rm -rf awscli-bundle
# Install Terraform
RUN wget https://releases.hashicorp.com/terraform/1.1.3/terraform_1.1.3_linux_amd64.zip \
&& unzip terraform_1.1.3_linux_amd64 \
&& mv terraform /usr/local/bin/terraform \
&& chmod +x /usr/local/bin/terraform
RUN mkdir -p /terraform
WORKDIR /terraform
COPY scripts/localstack-terraform-provision.sh /localstack-terraform-provision.sh
CMD ["/bin/bash", "/localstack-terraform-provision.sh"]
Now we need to configure the corresponding Docker Compose service and container.
docker-compose.yml
localstack-terraform-provision:
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-localstack-terraform-provision
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./terraform:/terraform
- ./scripts:/scripts
This points to the Dockerfile we just created and ensures that the container has access to the running Docker instance, as well as the terraform and scripts directories.
Next, we need to create the shell script mentioned above.
scripts/localstack-terraform-provision.sh
#!/bin/bash
(docker events --filter 'event=create' --filter 'event=start' --filter 'type=container' --filter 'container=localstack' --format '{{.Actor.Attributes.name}} {{.Status}}' &) | while read event_info
do
event_infos=($event_info)
container_name=${event_infos[0]}
event=${event_infos[1]}
echo "$container_name: status = ${event}"
if [[ $event == "start" ]]; then
sleep 10 # give localstack some time to start
terraform init
terraform apply --auto-approve
echo "The terraform configuration has been applied."
pkill -f "docker event.*"
fi
done
This script first runs a Docker CLI command that waits until it sees a Docker event, indicating that the LocalStack container has started up successfully. We do this so that we don't try to run Terraform without having LocalStack accessible. You can imagine how it might be hard to create an SQS queue if SQS for all intents and purposes didn't exist.
It may be a confusing move, but we're also going to make sure our localstack container waits for our localstack-terraform-provision container to start up. This way we guarantee that the localstack-terraform-provision container is up and watching for LocalStack to be up before LocalStack itself tries to start. If we don't do this, it's possible that our localstack-terraform-provision container would miss the start event from our localstack container.
docker-compose.yml
localstack:
container_name: 'localstack'
image: localstack/localstack
ports:
- '4566:4566'
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- '${TMPDIR:-/tmp}/localstack:/var/lib/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
depends_on:
# We wait for localstack-terraform-provision container to start
# so that it can watch for this localstack container to be ready
- localstack-terraform-provision
Finally, we make sure our application doesn't start until we've finished executing our Terraform.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
localstack:
condition: service_started
localstack-terraform-provision:
condition: service_completed_successfully
postgres:
condition: service_healthy
postgres-init:
condition: service_completed_successfully
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
- AWS_REGION=us-west-2
- AWS_ACCESS_KEY_ID=fake
- AWS_SECRET_ACCESS_KEY=fake
- SQS_NOTES_QUEUE_URL=http://localstack:4566/000000000000/notes-queue
command: ['npm', 'run', 'dev']
Start your local development environment
If you've followed along and have your application set up accordingly, or you're just playing around with our example project, you should be ready to start everything up and watch the magic!
To start Docker Compose, simply run docker-compose up.
You should see all of the required images get downloaded, the containers created and started, and everything running in the startup order we've defined via depends_on. Finally, you should see your application become available. For the example project, that looks like:
example-api | Running on http://0.0.0.0:8080
There will be a folder called db created with some files inside of it; this is essentially your running database. You'll also see some more files in your Terraform folder. These are the files Terraform uses to understand the state of your AWS resources.
We'll have a database running that is seeded with some data. In our case, we added a table called notes and a note. You can verify this locally by using a tool like psql to connect to your database and query it like this:
export PGPASSWORD=password; psql -U test -h localhost -d example
select * from notes;
id | contents | created_at | updated_at
--------------------------------------+---------------------------+----------------------------+----------------------------
6a71ff7e-577e-4991-bc70-4745b7fbbb78 | Look at this lovely note! | 2022-12-02 17:08:36.243954 | 2022-12-02 17:08:36.243954
Note that we're using a host of localhost and not postgres as we would use within our docker-compose environment.
Now try calling the application.
curl -H "Content-Type: application/json" \
-d '{"contents":"This is my test note!"}' \
"http://127.0.0.1:8080/notes"
If we check our database, we should see that the note was created.
id | contents | created_at | updated_at
--------------------------------------+---------------------------+----------------------------+----------------------------
6a71ff7e-577e-4991-bc70-4745b7fbbb78 | Look at this lovely note! | 2022-12-05 16:59:03.108637 | 2022-12-05 16:59:03.108637
a223103a-bb24-491b-b3c6-8690bc852ec9 | This is my test note! | 2022-12-05 17:26:33.845654 | 2022-12-05 17:26:33.845654
We can also inspect the SQS queue to see the corresponding message waiting to be processed.
awslocal sqs receive-message --region us-west-2 --queue-url \
http://localstack:4566/000000000000/notes-queue
{
"Messages": [
{
"MessageId": "0917d626-a85b-4772-b6fe-49babddeca76",
"ReceiptHandle": "NjA5OWUwOTktODMxNC00YjhjLWJkM",
"MD5OfBody": "73757bf6dfcc3980d48acbbb7be3d780",
"Body": "{\"id\":\"a223103a-bb24-491b-b3c6-8690bc852ec9\",\"contents\":\"This is my test note!\"}"
}
]
}
Note that LocalStack's default AWS account ID is 000000000000.
Finally, we can also call our GET endpoint to retrieve the note.
curl -H "Content-Type: application/json" "http://127.0.0.1:8080/notes/a223103a-bb24-491b-b3c6-8690bc852ec9"
{
"id":"a223103a-bb24-491b-b3c6-8690bc852ec9",
"contents":"This is my test note!",
"createdAt":"2022-12-05T17:26:33.845Z",
"updatedAt":"2022-12-05T17:26:33.845Z"
}
Conclusion
When developing cloud software on a team, it's often neither practical nor convenient for each person to have a dedicated cloud environment for local development and testing. Each team member would have to keep their personal cloud infrastructure in sync with the production cloud infrastructure, making it easy for things to become stale or drift. Sharing a single dedicated cloud environment for local development testing is also impractical, because changes under test can conflict and cause unexpected behavior. At the same time, you want the local development environment to mirror production as closely as possible. Developing against production itself can be slow, isn't always feasible because of data-sensitivity concerns, and can be difficult to do safely. These are difficult requirements to reconcile.
Ideally, if you've followed along with this guide, you'll now have an application with a local development environment that solves these requirements - no matter the backend application language or microservice framework! While this is mostly tailored to Postgres, it's possible to wire this up with any other database technology that can be run as a Docker container. We hope this guide helps you and your team members to iterate quickly and confidently on your product without stepping on each other's toes.