Self-hosting S3
What will this tutorial cover?
This tutorial will show how to set up MinIO to work with Capgo.
This is not technically required to run Capgo.
Setting up S3 allows you to upload bundles from the CLI.
Requirements
- Docker installed on your machine
- A self-hosted Capgo setup (the capgo/supabase edge functions referenced below)
Getting started
First, create a new directory, then create a folder named data inside it.
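For example, in a Unix-like shell (the directory name minio here is only an illustration):

# create a working directory for MinIO with a data folder inside it
mkdir minio
cd minio
mkdir data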
Then run the following command:
docker run \
-p 9000:9000 \
-p 9090:9090 \
--user $(id -u):$(id -g) \
--name minio1 \
-e "MINIO_ROOT_USER=ROOTUSER" \
-e "MINIO_ROOT_PASSWORD=CHANGEME123" \
-v PATH_TO_DATA_FOLDER_CREATED_IN_PREVIOUS_STEP:/data \
quay.io/minio/minio server /data --console-address ":9090"
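Once the container is running, the MinIO web console is available at http://localhost:9090 (log in with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values set above). As a quick sanity check, you can also hit MinIO's liveness endpoint on the S3 port:

# returns HTTP 200 when the MinIO server is up
curl -I http://localhost:9000/minio/health/live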
If you ever close the terminal window running this container, you can start it again with:
docker start minio1
If you ever need to change the configuration of MinIO, you can remove the container by running:
docker rm minio1
⚠️ This command does not remove MinIO data.
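If you want to start completely from scratch, you can also delete the data folder created at the beginning of this tutorial (path assumed relative to the directory you created there):

# removes every object stored by MinIO -- this cannot be undone
rm -rf ./data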
Setting up edge functions
Now that we have an S3 server running, we need to set up the Capgo edge functions to use it.
To do that, we need to create an env file named .env.local in capgo/supabase.
This file should look like this:
STRIPE_WEBHOOK_SECRET=test
STRIPE_SECRET_KEY=test
API_SECRET=testsecret
PLAN_MAKER=test
PLAN_SOLO=test
PLAN_TEAM=test
# Below is the actually important setup for S3
S3_ENDPOINT=172.17.0.1
S3_REGION=dev-region
S3_PORT=9000
S3_SSL=false
R2_ACCESS_KEY_ID=ROOTUSER
R2_SECRET_ACCESS_KEY=CHANGEME123
The IP 172.17.0.1 is a Docker IP that can be reached both from our local machine and from the Docker container running the edge functions.
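If that address does not work in your setup, one way to look up the gateway IP of Docker's default bridge network (default Docker install on Linux assumed) is:

# prints the gateway of the default bridge network, typically 172.17.0.1
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'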
To run the edge functions with our new env file, use:
supabase functions serve --env-file ./supabase/.env.local
Setting up CLI to use S3
The CLI will not work with MinIO by default. The following change to capacitor.config.ts¹ is required.
import type { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: 'com.demo.app',
  appName: 'demoApp',
  webDir: 'dist',
  bundledWebRuntime: false,
  plugins: {
    CapacitorUpdater: {
      // Without localS3 set to true, the upload command will fail
      localS3: true,
    },
  },
};

export default config;
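With this config in place, an upload from the CLI goes to the local MinIO server instead of Capgo's cloud storage. A typical upload invocation looks roughly like this (the API key is a placeholder from your self-hosted Capgo instance):

# upload the current web build as a new bundle
npx @capgo/cli bundle upload --apikey YOUR_API_KEY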
Footnotes
1. File is located in your app directory ↩