Fix 'getting started' links in examples folder (#979)
Summary:
Pull Request resolved: #979

Some of them refer to the old, deprecated docs.
Some of them reference the Markdown files via relative '../../' paths. Let's use the website links directly.

Reviewed By: shoumikhin, dbort

Differential Revision: D50309082

fbshipit-source-id: ecac525db2e14f21f6ab1c964ea5fce205b0b5f3
mergennachin committed Oct 17, 2023
1 parent 9e84584 commit 51029c6
Showing 12 changed files with 20 additions and 13 deletions.
2 changes: 1 addition & 1 deletion backends/apple/mps/setup.md
@@ -60,7 +60,7 @@ In order to be able to successfully build and run a model using the MPS backend

## Setting up Developer Environment

***Step 1.*** Please finish tutorial [Setting up executorch](getting-started-setup.md).
***Step 1.*** Please finish tutorial [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup).

***Step 2.*** Install dependencies needed to lower MPS delegate:

2 changes: 1 addition & 1 deletion examples/README.md
@@ -70,4 +70,4 @@ You will find demos of [ExecuTorch SDK](./sdk/) in the [`sdk/`](./sdk/) director

## Dependencies

Various models and workflows listed in this directory have dependencies on some other packages. You need to follow the setup guide in [Setting up ExecuTorch from GitHub](../docs/source/getting-started-setup.md) to have appropriate packages installed.
Various models and workflows listed in this directory have dependencies on some other packages. You need to follow the setup guide in [Setting up ExecuTorch from GitHub](https://pytorch.org/executorch/stable/getting-started-setup) to have appropriate packages installed.
2 changes: 1 addition & 1 deletion examples/apple/coreml/README.md
@@ -15,7 +15,7 @@ coreml

We will walk through an example model to generate a **CoreML** delegated binary file from a python `torch.nn.module` then we will use the `coreml/executor_runner` to run the exported binary file.

1. Following the setup guide in [Setting Up ExecuTorch](/docs/source/getting-started-setup.md)
1. Following the setup guide in [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
you should be able to get the basic development environment for ExecuTorch working.

2. Run `install_requirements.sh` to install dependencies required by the **CoreML** backend.
2 changes: 1 addition & 1 deletion examples/apple/mps/README.md
@@ -8,7 +8,7 @@ This README gives some examples on backend-specific model workflow.
## Prerequisite

Please finish the following tutorials:
- [Setting up executorch](../../../docs/website/docs/tutorials/00_setting_up_executorch.md).
- [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup).
- [Setting up MPS backend](../../../backends/apple/mps/setup.md).

## Delegation to MPS backend
2 changes: 1 addition & 1 deletion examples/demo-apps/android/ExecuTorchDemo/README.md
@@ -14,7 +14,7 @@ This guide explains how to setup ExecuTorch for Android using a demo app. The ap

:::{grid-item-card} Prerequisites
:class-card: card-prerequisites
* Refer to [Setting up ExecuTorch](getting-started-setup.md) to set up the repo and dev environment.
* Refer to [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment.
* Download and install [Android Studio and SDK](https://developer.android.com/studio).
* Supported Host OS: CentOS, macOS Ventura (M1/x86_64). See below for Qualcomm HTP specific requirements.
* *Qualcomm HTP Only[^1]:* To build and run on Qualcomm's AI Engine Direct, please follow [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](build-run-qualcomm-ai-engine-direct-backend.md) for hardware and software pre-requisites.
11 changes: 9 additions & 2 deletions examples/demo-apps/apple_ios/README.md
@@ -40,8 +40,15 @@ pip --version

### 3. Getting Started Tutorial

Before proceeding, follow the [Setting Up ExecuTorch](getting-started-setup.md)
tutorial to configure the basic environment.
Before proceeding, follow the [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
tutorial to configure the basic environment. Feel free to skip building anything
just yet. Make sure you have all the required dependencies installed, including
the following tools:

- Buck2 (as `/tmp/buck2`)
- CMake (`cmake` reachable at `$PATH`)
- FlatBuffers Compiler (`flatc` reachable at `$PATH` or as the `$FLATC_EXECUTABLE`
environment variable)

### 4. Backend Dependencies

2 changes: 1 addition & 1 deletion examples/models/llama2/README.md
@@ -26,7 +26,7 @@ This example tries to reuse the Python code, with modifications to make it compa


# Instructions:
1. Follow the [tutorial](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/00_setting_up_executorch.md) to set up ExecuTorch
1. Follow the [tutorial](https://pytorch.org/executorch/stable/getting-started-setup) to set up ExecuTorch
2. `cd examples/third-party/llama`
3. `pip install -e .`
4. Go back to `executorch` root, run `python3 -m examples.portable.scripts.export --model_name="llama2"`. The exported program, llama2.pte would be saved in current directory
2 changes: 1 addition & 1 deletion examples/portable/README.md
@@ -20,7 +20,7 @@ We will walk through an example model to generate a `.pte` file in [portable mod
from the [`models/`](../models) directory using scripts in the `portable/scripts` directory. Then we will run on the `.pte` model on the ExecuTorch runtime. For that we will use `executor_runner`.


1. Following the setup guide in [Setting up ExecuTorch from GitHub](../../docs/source/getting-started-setup.md)
1. Following the setup guide in [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
you should be able to get the basic development environment for ExecuTorch working.

2. Using the script `portable/scripts/export.py` generate a model binary file by selecting a
2 changes: 1 addition & 1 deletion examples/portable/custom_ops/README.md
@@ -3,7 +3,7 @@ This folder contains examples to register custom operators into PyTorch as well

## How to run

Prerequisite: finish the [setting up wiki](../../../docs/source/getting-started-setup.md).
Prerequisite: finish the [setting up wiki](https://pytorch.org/executorch/stable/getting-started-setup).

Run:

2 changes: 1 addition & 1 deletion examples/qualcomm/README.md
@@ -8,7 +8,7 @@ Here are some general information and limitations.

## Prerequisite

Please finish tutorial [Setting up executorch](../../docs/source/getting-started-setup.md).
Please finish tutorial [Setting up executorch](https://pytorch.org/executorch/stable/getting-started-setup).

Please finish [setup QNN backend](../../backends/qualcomm/setup.md).

2 changes: 1 addition & 1 deletion examples/sdk/README.md
@@ -14,7 +14,7 @@ examples/sdk
We will use an example model (in `torch.nn.Module`) and its representative inputs, both from [`models/`](../models) directory, to generate a [BundledProgram(`.bp`)](../../docs/source/sdk-bundled-io.md) file using the [script](scripts/export_bundled_program.py). Then we will use [sdk_example_runner](sdk_example_runner/sdk_example_runner.cpp) to execute the `.bp` model on the ExecuTorch runtime and verify the model on BundledProgram API.


1. Sets up the basic development environment for ExecuTorch by [Setting up ExecuTorch from GitHub](../../docs/source/getting-started-setup.md).
1. Sets up the basic development environment for ExecuTorch by [Setting up ExecuTorch from GitHub](https://pytorch.org/executorch/stable/getting-started-setup).

2. Using the [script](scripts/export_bundled_program.py) to generate a BundledProgram binary file by retrieving a `torch.nn.Module` model and its representative inputs from the list of available models in the [`models/`](../models) dir.

2 changes: 1 addition & 1 deletion examples/selective_build/README.md
@@ -3,7 +3,7 @@ To optimize binary size of ExecuTorch runtime, selective build can be used. This

## How to run

Prerequisite: finish the [setting up wiki](../../docs/source/getting-started-setup.md).
Prerequisite: finish the [setting up wiki](https://pytorch.org/executorch/stable/getting-started-setup).

Run:

