While most engineering tooling at DoorDash is focused on making safe, incremental improvements to existing systems, in part by testing in production (learn more about our end-to-end testing strategy), this is not always the best approach when launching an entirely new business line. Building from scratch often requires faster prototyping and customer validation than incremental improvements to an existing system. In the New Verticals organization at DoorDash, we are launching and growing new categories such as alcohol and other regulated goods, health, retail, convenience, and grocery. Often we're going from zero to one. We needed to move quite fast during one recent expansion of our business, which required a local development experience that could keep up. In this article, we will provide some context and then explain how we were able to speed up our development by enabling easy local development with PostgreSQL.
Deviating from the typical DoorDash development environment
Ideally, infrastructure and requirements are already in place when we develop a backend microservice, which is usually the case for new applications at DoorDash. Concrete requirements and existing infrastructure pave the way for development environments to integrate easily and safely, removing some of the need for rapid iteration because the application design can be front-loaded based on the requirements. This existing stability helps avoid unexpected behavior within the application, ensuring that deployments are a safe operation.
However, this entirely new microservice could not be built on any existing infrastructure for compliance reasons. Instead, we had to develop our application in parallel with infrastructure planning and spin-up. As backend developers, we needed to stay unblocked while the infrastructure - in this case AWS resources - was being created. Our backend had to be developer-friendly and allow the team to iterate rapidly to deal with evolving requirements and work independently on separate tasks without the testing interrupting anyone's work. With the amorphous nature of the task, the typical DoorDash local development environment approach was not suitable.
Creating a new local development approach
To kick off the creation of a local dev environment, we first had to take stock of our desired infrastructure as well as our available tooling and resources before charting out how to set up the environment quickly and efficiently. We knew we'd be deploying a Docker container to Fargate as well as using an Amazon Aurora PostgreSQL database and Terraform to model our infrastructure as code. It was fair to assume that we would use other AWS services, particularly SQS and AWS Secrets Manager.
One local development approach would have been to mock, or create dummy versions of our cloud resources. Local mocks may work well under some circumstances, but it's difficult to be fully confident in the final end-to-end experience of an application because the mocks may be incorrect, lack important features, or ultimately have unanticipated behaviors.
With these considerations in mind, we developed a strategy for designing our local development environment that would best balance the tradeoffs between development speed, ease of use, and similarity to production. We broke the strategy into four steps:
- Use Docker Compose for our Docker application and all of its required resources.
- Set up a containerized PostgreSQL database that runs locally.
- Use LocalStack to enable AWS resources that run locally.
- Use Terraform to create consistent AWS resources in LocalStack.
Understanding the tradeoffs
Our local development approach came with a number of pros and cons:
Pros:
- Fast and easy for anyone new to the project to get the local development environment up and running on their own.
- Consistent local development environments across machines and across environment startups.
- No chance of accidentally touching production data or systems.
- Easy to iterate on the desired infrastructure and add new application capabilities.
- No cloud infrastructure required.
- No long waits during startup. Initial runs require extra time to download the Docker images, but every subsequent startup should be fast.
- Everything captured in code, with the application and its full environment mapped out via Docker Compose and Terraform.
- Agnostic to backend microservice language and framework.
Cons:
- Because it's not actually running in production, the final result may not be an entirely accurate reflection of how the microservice ultimately will perform in production.
- Because there is no interaction with any production resources or data, it can be difficult to create the dummy data needed to accurately reflect all test scenarios.
- Adding additional first-party microservices that have their own dependencies may not be straightforward and can become unwieldy.
- As the application and infrastructure grows, running everything locally may become a resource drain on an engineer's machine.
- Some LocalStack AWS services don't have 1:1 feature parity with AWS. Additionally, some require a paid subscription.
The bottom line is that this local development approach lets new developers get started faster, keeps the environment consistent, avoids production mishaps, is easy to iterate on, and can be tracked via Git. On the other hand, generating dummy data can be difficult and as the application's microservice dependency graph grows, individual local machines may be hard-pressed to run everything locally.
Below, we detail each element of our approach. If you're following along with your own application, substitute your own implementation details for the ones specific to our application, including items like Node.js, TypeScript, AWS services, and environment variable names.
Clone the example project
Let's get started by checking out our example project from GitHub. This project has been set up according to the instructions detailed in the rest of this post.
Example project: doordash-oss/local-dev-env-blog-example
In this example, our backend application has been built using TypeScript, Node.js, Express, and TypeORM. You're not required to use any of these technologies for your own application, of course, and we won't focus on any specifics related to them.
This example project is based on an application that exposes two REST endpoints - one for creating a note and another for retrieving one.
POST /notes
GET /notes/:noteid
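Conceptually, the two endpoints boil down to a create and a lookup operation. The sketch below is an illustrative in-memory version only (the real service persists notes in PostgreSQL via TypeORM, and the function names here are hypothetical):

```typescript
import { randomUUID } from 'crypto';

// Illustrative in-memory sketch of the two endpoint operations.
// The real example project persists notes in PostgreSQL via TypeORM.
interface Note {
  id: string;
  contents: string;
  createdAt: Date;
}

const notes = new Map<string, Note>();

// Rough equivalent of POST /notes
function createNote(contents: string): Note {
  const note: Note = { id: randomUUID(), contents, createdAt: new Date() };
  notes.set(note.id, note);
  return note;
}

// Rough equivalent of GET /notes/:noteid
function getNote(id: string): Note | undefined {
  return notes.get(id);
}
```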
When a note is posted, we also send a message to an SQS queue. Currently, nothing is done with these messages in the queue, but in the future we could wire up a consumer of the queue to further process the notes asynchronously.
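As a sketch of what that publish step might involve, here's a small helper that builds the SQS SendMessage parameters for a new note. The function name is hypothetical, and locally the queue URL would come from the SQS_NOTES_QUEUE_URL environment variable pointing at LocalStack; see the example project for the actual wiring:

```typescript
// Hypothetical helper: build the SendMessage parameters for a new note.
// The real project passes parameters shaped like this to an SQS client.
function buildNotesQueueMessage(
  note: { id: string; contents: string },
  queueUrl: string // locally: the LocalStack queue URL from SQS_NOTES_QUEUE_URL
): { QueueUrl: string; MessageBody: string } {
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify({ id: note.id, contents: note.contents }),
  };
}
```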
Install the prerequisite packages needed for the example project to start. Note that these instructions can also be found in the project's README.
- Node version >= 16.13 but <17 installed.
- https://nodejs.org/en/download/
- Docker Desktop installed and running.
- https://www.docker.com/products/docker-desktop/
- postgresql installed.
- `brew install postgresql`
- awslocal installed.
- https://docs.localstack.cloud/integrations/aws-cli/#localstack-aws-cli-awslocal
- Run npm install
- `npm install`
Set up Docker Compose with your application
Docker Compose is a tool for defining and running multi-container Docker environments. In this case, we'll run our application as one container and use a few others to simulate our production environment as accurately as possible.
1. Start by configuring the application to run via Docker Compose. First, create a Dockerfile, which describes how the container should be built.
dockerfiles/Dockerfile-api-dev
FROM public.ecr.aws/docker/library/node:lts-slim
# Create app directory
WORKDIR /home/node/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY tsconfig.json ./
COPY src ./src
RUN npm install --frozen-lockfile
# We have to install these dev dependencies as regular dependencies to get hot swapping to work
RUN npm install nodemon ts-node @types/pg
# Bundle app source
COPY . .
This Dockerfile contains steps specific to Node.js. Unless you're also using Node.js and TypeORM, yours will look different. For more about the Dockerfile specification, you can check out the Docker documentation here.
2. Next, create a docker-compose.yml file and define the application container.
docker-compose.yml
version: '3.8'
services:
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
command: ['npm', 'run', 'dev']
Here we have defined a service called api that will spin up a container named example-api that uses the Dockerfile we previously defined as the image. It exposes port 8080, which is the port our Express server starts on, and mounts the ./src directory to the directory /home/node/app/src. We're also setting the NODE_ENV environment variable to development and starting the application with the command npm run dev. You can see what npm run dev does specifically by checking out that script in package.json here. In this case, we're using a package called nodemon which will auto-restart our backend Node.js express application whenever we make a change to any TypeScript file in our src directory, a process that is called hotswapping. This isn't necessary for your application, but it definitely speeds up the development process.
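For reference, a dev script wired up this way typically looks something like the following in package.json (illustrative only; the entry point path and exact flags are assumptions, so check the example project's package.json for the real script):

```json
{
  "scripts": {
    "dev": "nodemon --watch src --ext ts --exec ts-node src/index.ts"
  }
}
```

With a script like this, nodemon restarts the ts-node process whenever a .ts file under src changes, which is what enables the hot swapping described above.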
Create a local database
Most backend microservices wouldn't be complete without a database layer for persisting data. This next section will walk you through adding a PostgreSQL database locally. While we use PostgreSQL here, many other databases have Docker images available, such as CockroachDB or MySQL.
1. First, we'll set up a PostgreSQL database to be run and connected to locally via Docker Compose.
Add a new PostgreSQL service to the docker-compose.yml file.
docker-compose.yml
postgres:
container_name: 'postgres'
image: public.ecr.aws/docker/library/postgres:14.3-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DB=example
ports:
- '5432:5432'
volumes:
- ./db:/var/lib/postgresql/data
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U test -d example']
interval: 5s
timeout: 5s
retries: 5
Here we have defined a service and container called postgres. It uses the public PostgreSQL 14.3 image because we don't need any customization. We've specified a few environment variables, namely the user and password needed to connect to the database and the name of the database. We're exposing the default PostgreSQL port 5432 locally and using a local folder named db for the underlying database data. We've also defined a health check that checks that the example database is up and accessible.
Now we can connect our application to it by adding relevant environment variables that match the database credentials we configured.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
postgres:
condition: service_healthy
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
command: ['npm', 'run', 'dev']
One interesting thing to note about connections between containers in a Docker Compose environment is that the hostname you use to connect to another container is the container's name. In this case, because we want to connect to the postgres container, we set the host environment variable to be postgres. We've also specified a depends_on section which tells the example-api container to wait to start up until the health check for our postgres container returns successfully. This way our application won't try to connect to the database before it is up and running.
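To make the wiring concrete, here's a sketch of how those environment variables might be folded into a connection config. The function and field names are illustrative, not the example project's actual code; the project feeds similar values into TypeORM:

```typescript
// Illustrative: assemble a database connection config from the environment
// variables defined in docker-compose.yml. Defaults mirror the compose file.
interface DbConfig {
  host: string;
  port: number;
  username: string;
  password: string;
  database: string;
}

function dbConfigFromEnv(env: Record<string, string | undefined>): DbConfig {
  return {
    // Inside the Docker Compose environment the host is the container name,
    // "postgres"; from the host machine it would be "localhost".
    host: env.POSTGRES_HOST ?? 'localhost',
    port: Number(env.POSTGRES_PORT ?? 5432),
    username: env.POSTGRES_USER ?? 'test',
    password: env.POSTGRES_PASSWORD ?? 'password',
    database: env.POSTGRES_DATABASE_NAME ?? 'example',
  };
}
```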
2. Now we'll seed the database with some data whenever it starts up.
If you're testing your application in any way, it's probably useful to have a local database that always has some data. To ensure a consistent local development experience across docker-compose runs and across different developers, we can add a Docker container which runs arbitrary SQL when docker-compose starts.
To do this, we start by defining a bash script and a SQL file as shown below.
scripts/postgres-seed.sql
-- Add any commands you want to run on DB startup here.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS notes (
id UUID NOT NULL DEFAULT uuid_generate_v4(),
contents varchar(450) NOT NULL,
created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now(),
updated_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now()
);
-- Since data is kept between container restarts, you probably want to delete old inserted data so that you have a known state every time the database starts up
DELETE FROM notes;
INSERT INTO notes (id, contents) VALUES ('6a71ff7e-577e-4991-bc70-4745b7fbbb78', 'Look at this lovely note!');
This is just a simple SQL file that creates a database table called "notes" and inserts a note into it. Note the use of IF NOT EXISTS and the DELETE, which ensure that this script will always execute successfully, whether it's run after the database is first created or multiple times after.
scripts/local-postgres-init.sh
#!/bin/bash
export PGPASSWORD=password; psql -U test -h postgres -d example -f /scripts/postgres-seed.sql
This bash file runs our postgres-seed.sql script against our database.
Next, define the Docker service and container in docker-compose to run the script and SQL.
docker-compose.yml
postgres-init:
container_name: postgres-init
image: public.ecr.aws/docker/library/postgres:14.3-alpine
volumes:
- './scripts:/scripts'
entrypoint: '/bin/bash'
command: ['/scripts/local-postgres-init.sh']
depends_on:
postgres:
condition: service_healthy
This spins up a container named postgres-init that runs our bash script from above. Just like our application, it waits to start up until our database container is up and running.
Speaking of our application, let's also make sure that it waits for our database to be seeded.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
postgres:
condition: service_healthy
postgres-init:
condition: service_completed_successfully
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
command: ['npm', 'run', 'dev']
Set up LocalStack
If you're taking full advantage of AWS, your local development environment likely wouldn't be complete without access to the AWS services you rely on - or at least mocks of them. LocalStack lets you run many of your AWS resources locally alongside your application, ensuring test data is always separated from the rest of your team while maintaining an application environment that's as close to prod as possible.
1. First, configure LocalStack to run with Docker Compose.
Just like our database or application, we define a LocalStack service and container in our docker-compose.yml file. The configuration we're using is based on the recommended configuration from LocalStack.
docker-compose.yml
localstack:
container_name: 'localstack'
image: localstack/localstack
ports:
- '4566:4566'
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- '${TMPDIR:-/tmp}/localstack:/var/lib/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
Here we've defined a service named localstack with a container named localstack. It uses the publicly available LocalStack image and exposes port 4566, which is the default port LocalStack runs on. Per their config suggestions, we set an environment variable that connects LocalStack to Docker and a couple of volumes, one of which is required for Docker connectivity while the other specifies where LocalStack should store its data.
2. Now that you have LocalStack running alongside your application, we can create some AWS resources that your application can interact with.
This can be done manually using the LocalStack CLI:
awslocal s3api create-bucket --bucket my-test-bucket
awslocal s3api list-buckets
{
"Buckets": [
{
"Name": "my-test-bucket",
"CreationDate": "2022-12-02T21:53:24.000Z"
}
],
"Owner": {
"DisplayName": "webfile",
"ID": "bcaf1ffd86f41161ca5fb16fd081034f"
}
}
For more information on the commands, check out the AWS CLI v1 documentation and the LocalStack docs on AWS service feature coverage. Instead of using aws, you just use awslocal.
Let's also make sure our application doesn't try to start up without LocalStack already running.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
localstack:
condition: service_started
postgres:
condition: service_healthy
postgres-init:
condition: service_completed_successfully
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
- AWS_REGION=us-west-2
- AWS_ACCESS_KEY_ID=fake
- AWS_SECRET_ACCESS_KEY=fake
- SQS_NOTES_QUEUE_URL=http://localstack:4566/000000000000/notes-queue
command: ['npm', 'run', 'dev']
Set up Terraform
While it's great to be able to create AWS resources on the fly for your application locally, you probably have some resources you want to start up every single time with your application. Terraform is a good tool to ensure a consistent and reproducible AWS infrastructure.
1. To start, define your infrastructure in Terraform.
We're going to define our infrastructure in a stock standard .tf file. The only difference is that we need to specify that the AWS endpoint we want to interact with is actually LocalStack.
Let's add a queue.
terraform/localstack.tf
provider "aws" {
region = "us-west-2"
access_key = "test"
secret_key = "test"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
endpoints {
sqs = "http://localstack:4566"
}
}
resource "aws_sqs_queue" "queue" {
name = "notes-queue"
}
Here we've set up a very basic Terraform configuration for AWS resources. All the values in the provider section should stay as-is except for the region, which is up to you. Just remember that your application will need to use the same region. You can see we set up an SQS Queue called "notes-queue" and we've made sure to set the SQS endpoint to localstack.
2. Continuing with the theme of automation via Docker Compose, we can now use Docker to automatically apply our Terraform configuration on startup.
Let's create a new Docker-based service+container in our docker-compose.yml file with a Dockerfile that installs Terraform and the AWS CLI, and then runs Terraform to create our resources. Yes, you heard that correctly. This container is going to run Docker itself (Docker-ception!). More on that in a second.
First, we need our Dockerfile. It looks complicated, but it just involves these simple steps:
- Install the necessary prerequisites.
- Install the AWS CLI.
- Install Terraform.
- Copy our local script, which runs Terraform, into the container image.
- Have the image run our Terraform script when the container starts.
dockerfiles/Dockerfile-localstack-terraform-provision
FROM docker:20.10.10
RUN apk update && \
apk upgrade && \
apk add --no-cache bash wget unzip
# Install AWS CLI
RUN echo -e 'http://dl-cdn.alpinelinux.org/alpine/edge/main\nhttp://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/testing' > /etc/apk/repositories && \
wget "s3.amazonaws.com/aws-cli/awscli-bundle.zip" -O "awscli-bundle.zip" && \
unzip awscli-bundle.zip && \
apk add --update groff less python3 curl && \
ln -s /usr/bin/python3 /usr/bin/python && \
rm /var/cache/apk/* && \
./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
rm awscli-bundle.zip && \
rm -rf awscli-bundle
# Install Terraform
RUN wget https://releases.hashicorp.com/terraform/1.1.3/terraform_1.1.3_linux_amd64.zip \
&& unzip terraform_1.1.3_linux_amd64.zip \
&& mv terraform /usr/local/bin/terraform \
&& chmod +x /usr/local/bin/terraform
RUN mkdir -p /terraform
WORKDIR /terraform
COPY scripts/localstack-terraform-provision.sh /localstack-terraform-provision.sh
CMD ["/bin/bash", "/localstack-terraform-provision.sh"]
Now we need to set up the corresponding Docker Compose service and container.
docker-compose.yml
localstack-terraform-provision:
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-localstack-terraform-provision
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./terraform:/terraform
- ./scripts:/scripts
This points at the Dockerfile we just created and makes sure the container has access to the running Docker instance, as well as to the Terraform and scripts directories.
Next, we need to create the aforementioned shell script.
scripts/localstack-terraform-provision.sh
#!/bin/bash
(docker events --filter 'event=create' --filter 'event=start' --filter 'type=container' --filter 'container=localstack' --format '{{.Actor.Attributes.name}} {{.Status}}' &) | while read event_info
do
event_infos=($event_info)
container_name=${event_infos[0]}
event=${event_infos[1]}
echo "$container_name: status = ${event}"
if [[ $event == "start" ]]; then
sleep 10 # give localstack some time to start
terraform init
terraform apply --auto-approve
echo "The terraform configuration has been applied."
pkill -f "docker event.*"
fi
done
This script first runs a Docker CLI command that waits until it sees a Docker event, indicating that the LocalStack container has started up successfully. We do this so that we don't try to run Terraform without having LocalStack accessible. You can imagine how it might be hard to create an SQS queue if SQS for all intents and purposes didn't exist.
It may be a confusing move, but we're also going to make sure our localstack container waits for our localstack-terraform-provision container to start up. This way we guarantee that the localstack-terraform-provision container is up and watching for LocalStack to be up before LocalStack itself tries to start. If we don't do this, it's possible that our localstack-terraform-provision container would miss the start event from our localstack container.
docker-compose.yml
localstack:
container_name: 'localstack'
image: localstack/localstack
ports:
- '4566:4566'
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- '${TMPDIR:-/tmp}/localstack:/var/lib/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
depends_on:
# We wait for localstack-terraform-provision container to start
# so that it can watch for this localstack container to be ready
- localstack-terraform-provision
Finally, we make sure our application doesn't start until we've finished executing our Terraform.
docker-compose.yml
api:
container_name: example-api
build:
context: .
dockerfile: ./dockerfiles/Dockerfile-api-dev
ports:
- '8080:8080'
depends_on:
localstack:
condition: service_started
localstack-terraform-provision:
condition: service_completed_successfully
postgres:
condition: service_healthy
postgres-init:
condition: service_completed_successfully
volumes:
- ./src:/home/node/app/src
environment:
- NODE_ENV=development
- POSTGRES_USER=test
- POSTGRES_PASSWORD=password
- POSTGRES_DATABASE_NAME=example
- POSTGRES_PORT=5432
- POSTGRES_HOST=postgres
- AWS_REGION=us-west-2
- AWS_ACCESS_KEY_ID=fake
- AWS_SECRET_ACCESS_KEY=fake
- SQS_NOTES_QUEUE_URL=http://localstack:4566/000000000000/notes-queue
command: ['npm', 'run', 'dev']
Starting up the local development environment
If you've followed along and have your application set up accordingly, or you're just playing around with our example project, you should be ready to start everything up and watch the magic!
To start Docker Compose, simply run docker-compose up.
You should see that all required images are downloaded, containers created and started, and everything running in the startup order we've defined via depends_on. Finally, you should see your application become available. In our case with the example project, this looks like:
example-api | Running on http://0.0.0.0:8080
There will be a folder called db created with some files inside of it; this is essentially your running database. You'll also see some more files in your Terraform folder. These are the files Terraform uses to understand the state of your AWS resources.
We'll have a database running that is seeded with some data. In our case, we added a table called notes and a note. You can verify this locally by using a tool like psql to connect to your database and query it like this:
export PGPASSWORD=password; psql -U test -h localhost -d example
select * from notes;
id | contents | created_at | updated_at
--------------------------------------+---------------------------+----------------------------+----------------------------
6a71ff7e-577e-4991-bc70-4745b7fbbb78 | Look at this lovely note! | 2022-12-02 17:08:36.243954 | 2022-12-02 17:08:36.243954
Note that we're using a host of localhost and not postgres as we would use within our docker-compose environment.
Now try calling the application.
curl -H "Content-Type: application/json" \
-d '{"contents":"This is my test note!"}' \
"http://127.0.0.1:8080/notes"
If we check our database again, we should see that the note has been created.
id | contents | created_at | updated_at
--------------------------------------+---------------------------+----------------------------+----------------------------
6a71ff7e-577e-4991-bc70-4745b7fbbb78 | Look at this lovely note! | 2022-12-05 16:59:03.108637 | 2022-12-05 16:59:03.108637
a223103a-bb24-491b-b3c6-8690bc852ec9 | This is my test note! | 2022-12-05 17:26:33.845654 | 2022-12-05 17:26:33.845654
We can also inspect the SQS queue to see that there's a corresponding message waiting to be processed.
awslocal sqs receive-message --region us-west-2 --queue-url \
http://localstack:4566/000000000000/notes-queue
{
"Messages": [
{
"MessageId": "0917d626-a85b-4772-b6fe-49babddeca76",
"ReceiptHandle": "NjA5OWUwOTktODMxNC00YjhjLWJkM",
"MD5OfBody": "73757bf6dfcc3980d48acbbb7be3d780",
"Body": "{\"id\":\"a223103a-bb24-491b-b3c6-8690bc852ec9\",\"contents\":\"This is my test note!\"}"
}
]
}
Note that LocalStack's default AWS account id is 000000000000.
Finally, we can also call our GET endpoint to retrieve this note.
curl -H "Content-Type: application/json" "http://127.0.0.1:8080/notes/a223103a-bb24-491b-b3c6-8690bc852ec9"
{
"id":"a223103a-bb24-491b-b3c6-8690bc852ec9",
"contents":"This is my test note!",
"createdAt":"2022-12-05T17:26:33.845Z",
"updatedAt":"2022-12-05T17:26:33.845Z"
}
Conclusion
When developing cloud software as part of a team, it's often impractical or inconvenient for every person to have a dedicated cloud environment for local development testing. Teams would have to keep all of their personal cloud infrastructure in sync with the production cloud infrastructure, making it easy for things to become stale or drift. It's also impractical to share a single dedicated cloud environment for local development testing, because the changes being tested can conflict and cause unexpected behavior. At the same time, you want the local development environment to resemble production as closely as possible. Developing in production itself can be slow, isn't always feasible because of potential data-sensitivity issues, and can be difficult to configure safely. These are difficult requirements to combine.
Ideally, if you've followed along with this guide, you'll now have an application with a local development environment that solves these requirements - no matter the backend application language or microservice framework! While this is mostly tailored to Postgres, it's possible to wire this up with any other database technology that can be run as a Docker container. We hope this guide helps you and your team members to iterate quickly and confidently on your product without stepping on each other's toes.