This repository is split into on-chain and off-chain oracle implementations. You can learn more about the Orakl Network from its documentation.
Run a local data feed connected to the testnet:
- Deploy contracts on the testnet (Baobab)
- Run postgres & redis
- Run orakl-api & orakl-delegator
- Insert deployed data feed & set delegator fee payer
- Run listener, worker, reporter, and fetcher
- Activate inserted data feed
- Docker

```sh
brew install docker
brew install docker-compose
```
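Once installed, you can confirm both tools are available before moving on:

```sh
docker --version
docker-compose --version
```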
- Env setup

Nearly everything is already set up, but there are two variables that must be set manually in the following env files:

`dockerfiles/local-data-feed/envs/.contracts.env`

```sh
MNEMONIC="{MNEMONIC for contract deployer wallet}"
```

`dockerfiles/local-data-feed/envs/.cli.env`

```sh
DELEGATOR_REPORTER_PK={private key for delegator fee payer}
```
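Filled in, the two files might look like the following sketch (placeholder values only; never commit real credentials):

```sh
# dockerfiles/local-data-feed/envs/.contracts.env
MNEMONIC="word1 word2 word3 ... word12"  # placeholder: your deployer wallet mnemonic

# dockerfiles/local-data-feed/envs/.cli.env
DELEGATOR_REPORTER_PK=0x0000000000000000000000000000000000000000000000000000000000000000  # placeholder: fee payer private key
```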
- Docker Compose Build: builds all required images for docker-compose.

```sh
docker-compose -f docker-compose.local-data-feed.yaml build
```

- Docker Compose Up: runs all required images to run the data feed locally.

```sh
docker-compose -f docker-compose.local-data-feed.yaml up
```

- Docker Compose Down: shuts down all related containers (the `-v` flag also removes volumes).

```sh
docker-compose -f docker-compose.local-data-feed.yaml down -v
```
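Optionally (a standard docker-compose pattern, not specific to this repository), you can run the stack detached and follow the logs separately:

```sh
# Start containers in the background
docker-compose -f docker-compose.local-data-feed.yaml up -d

# Tail logs from all services
docker-compose -f docker-compose.local-data-feed.yaml logs -f
```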
To run the VRF or Request-Response (core) services locally, use `docker-compose.local-core.yaml` instead:

- Docker Compose Build: builds all required images for docker-compose.

```sh
docker-compose -f docker-compose.local-core.yaml build
```

- Docker Compose Up: runs all required images for the selected service.

```sh
SERVICE=rr docker-compose -f docker-compose.local-core.yaml up --force-recreate
```

- Docker Compose Down: shuts down all related containers.

```sh
docker-compose -f docker-compose.local-core.yaml down -v
```
Replace `SERVICE` with whichever service you'd like to run. The options are `vrf` and `rr`, which represent the VRF and Request-Response services respectively.
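The two invocations differ only in the `SERVICE` value:

```sh
# Run the VRF service
SERVICE=vrf docker-compose -f docker-compose.local-core.yaml up --force-recreate

# Run the Request-Response service
SERVICE=rr docker-compose -f docker-compose.local-core.yaml up --force-recreate
```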
Here is what happens after the above command is run (a quick verification sketch follows the list):

- `api`, `postgres`, `redis`, and `json-rpc` services will start as separate docker containers
- `postgres` will get populated with the necessary data:
  - chains
  - services
  - vrf keys (if the service is `vrf`)
  - listener (after contracts are deployed)
  - reporter (after contracts are deployed)
- migration files in `contracts/v0.1/migration/` get updated with the provided keys and other values
- relevant coordinator and prepayment contracts get deployed
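To verify that everything came up, list the containers belonging to this compose project:

```sh
docker-compose -f docker-compose.local-core.yaml ps
```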
Keep in mind that you'll need the `keyHash` value for the VRF consumer; update it in `vrf-consumer/scripts/utils.ts`.
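If you're unsure where that value lives, a quick search will point you to the assignment (assuming it mentions `keyHash` by name):

```sh
grep -n "keyHash" vrf-consumer/scripts/utils.ts
```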
After deploying consumer contracts, you can spin up the listener, worker, and reporter services from core and make requests to the VRF or Request-Response consumers.
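As a rough sketch of the consumer-deployment step (assuming a Hardhat-based consumer project; the script path and network name below are assumptions, so adjust them to the actual repository layout):

```sh
# Hypothetical deployment of the VRF consumer contract via Hardhat.
# Adjust the script path and network name to the consumer repo's setup.
cd vrf-consumer
npx hardhat run scripts/deploy.ts --network localhost
```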
- The current automation is not designed to run both VRF and Request-Response services simultaneously.
- Therefore, every time a new service (VRF or Request-Response) is started, all the running containers related to `core` will be recreated, meaning you'll lose all changes in those containers.