Vitis Compiler Command

This section describes the Vitis compiler command, v++, and the various options it supports for both compiling and linking the FPGA binary.

The Vitis compiler is a standalone command line utility for both compiling kernel accelerator functions into Xilinx object (XO) files, and linking them with other XO files and supported platforms to build an FPGA binary.

For additional information about the use of the v++ command options for compile, link, packaging, and general processes, see these additional sections:

Vitis Compiler General Options

The Vitis compiler supports many options for both the compilation and linking processes. These options provide a range of features: some apply specifically to compilation or linking, while others can be used for, or are required by, both.

TIP: All Vitis compiler options can be specified in a configuration file for use with the --config option, as discussed in the Vitis Compiler Configuration File. For example, the --platform option can be specified in a configuration file without a section head using the following syntax:
platform=xilinx_u200_xdma_201830_2

--advanced

Applies to
Compile and link

Specify parameters and properties for use by the v++ command. See --advanced Options for more information.

--board_connection

Applies to
Compile and link
--board_connection

Specifies a dual in-line memory module (DIMM) board file for each DIMM connector slot. The board is specified using the Vendor:Board:Name:Version (vbnv) attribute of the DIMM card as it appears in the board repository.

For example:

<DIMM_connector>:<vbnv_of_DIMM_board>
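
For example, using a hypothetical DIMM connector name and board vbnv (substitute the connector and vbnv values reported for your platform and board repository):

v++ --link --board_connection dimm1:xilinx.com:dimm:ddr4_sdram:1.0 ...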

-c | --compile

Applies to
Compile
--compile

Required for compilation, but mutually exclusive with --link and --package. Run v++ -c to generate XO files from kernel source files.

--clock

Applies to
Link

Provide a method for assigning clocks to kernels during the linking process. See --clock Options for more information.

--config

Applies to
Compile, link, and package
--config <config_file> ...

Specifies a configuration file containing v++ command options. The configuration file can be used to capture compilation, linking, or packaging strategies that can be easily reused by referring to the config file on the v++ command line. In addition, the config file allows the v++ command line to be shortened to include only the options that are not specified in the config file. Refer to the Vitis Compiler Configuration File for more information.

TIP: Multiple configuration files can be specified on the v++ command line. A separate --config switch is required for each file used. For example:
v++ -l --config cfg_connectivity.cfg --config cfg_vivado.cfg ...
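
As a minimal sketch, a configuration file such as the cfg_connectivity.cfg referenced above might contain entries like the following (the platform, kernel, argument, and memory names are placeholders; the [connectivity] options are described later in this section):

platform=xilinx_u200_xdma_201830_2

[connectivity]
nk=vadd:2:vadd_1.vadd_2
sp=vadd_1.A:DDR[0]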

--connectivity

Applies to
Link

Used to specify important architectural details of the device binary during the linking process. See --connectivity Options for more information.

--custom_script

Applies to
Compile and link
--custom_script <kernel_name>:<file_name>

This option lets you specify custom Tcl scripts to be used in the build process during compilation or linking. Use with the --export_script option to create, edit, and run the scripts to customize the build process.

When used with the v++ --compile command, this option lets you specify a custom HLS script to be used when compiling the specified kernel. The script lets you modify or customize the Vitis HLS synthesis run. Use the --export_script option to extract the Tcl script that Vitis HLS uses to compile the kernel, modify the script as needed, and resubmit it using the --custom_script option to better manage the kernel build process.

The argument lets you specify the kernel name, and path to the Tcl script to apply to that kernel. For example:
v++ -c -k kernel1 -export_script ...
*** Modify the exported script to customize in some way, then resubmit. ****
v++ -c --custom_script kernel1:./kernel1.tcl ...

When used with the v++ --link command for the hardware build target (-t hw), this option lets you specify the absolute path to an edited run_script_map.dat file. This file contains a list of steps in the build process, and Tcl scripts that are run by the Vitis and Vivado tools during those steps. You can edit run_script_map.dat to specify custom Tcl scripts to run at those steps in the build process. You must use the following steps to customize the Tcl scripts:

  1. Run the build process specifying the --export_script option as follows:
    v++ -t hw -l -k kernel1 -export_script ...
  2. Copy the Tcl scripts referenced in the run_script_map.dat file for any of the steps you want to customize. For example, copy the Tcl file specified for the synthesis run, or the implementation run. You must copy the file to a separate location, outside of the project build structure.
  3. Edit the Tcl script to add or modify any of the existing commands to create a new custom Tcl script.
  4. Edit the run_script_map.dat file to point a specific implementation step to the new custom script.
  5. Relaunch the build process using the --custom_script option, specifying the absolute path to the run_script_map.dat file as shown below:
    v++ -t hw -l -k kernel1 -custom_script /path/to/run_script_map.dat
IMPORTANT: When editing a custom synthesis run script, you must either comment out the lines related to the dont_touch.xdc file, or edit the lines to point to a new user-specified dont_touch.xdc file. The specific lines to comment or edit are shown below:
read_xdc dont_touch.xdc
set_property used_in_implementation false [get_files dont_touch.xdc]

The synthesis run returns an error related to a missing dont_touch.xdc file if this is not done.
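
For example, a minimal way to satisfy this requirement is to comment out both lines in your copy of the synthesis script using Tcl comments:

# read_xdc dont_touch.xdc
# set_property used_in_implementation false [get_files dont_touch.xdc]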

--debug

Applies to
Link

Specify debug IP core insertion in the device binary (.xclbin). See --debug Options for more information.

-D | --define

Applies to
Compile and link
--define <arg>

Valid macro name and definition pair: <name>=<definition>.

Predefine name as a macro with definition. This option is passed to the v++ pre-processor.
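
For example, to define a hypothetical macro named WIDTH with the value 64 when compiling a kernel:

v++ -c -D WIDTH=64 ...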

--export_script

Applies to
Compile and link
--export_script

This option runs the build process up to the point of exporting a script file, or list of script files, and then stops execution. The build process must be completed using the --custom_script option. This lets you edit the exported script, or list of scripts, and then rerun the build using your custom scripts.

When used with the v++ --compile command, this option exports a Tcl script for the specified kernel, <kernel_name>.tcl, that can be used to execute Vitis HLS, but stops the build process before actually launching the HLS tool. This lets you interrupt the build process to edit the generated Tcl script, and then restart the build process using the --custom_script option, as shown in the following example:

v++ -c -k kernel1 -export_script ...
TIP: This option is not supported for software emulation (-t sw_emu) of OpenCL kernels.

When used with the v++ --link command for the hardware build target (-t hw), this option exports a run_script_map.dat file in the current directory. This file contains a list of steps in the build process, and Tcl scripts that are run by the Vitis and Vivado tools during those steps. You can edit the specified Tcl scripts, customizing the build process in those scripts, and relaunch the build using the --custom_script option. Export the run_script_map.dat file using the following command:

v++ -t hw -l -k kernel1 -export_script ...

--from_step

Applies to
Compile and link
--from_step <arg>

Specifies a step name for the Vitis compiler build process, to start the build process from that step. If intermediate results are available, the build process fast-forwards and begins execution at the named step if possible. This allows you to run the build through a --to_step, interact with your project in some way, and then resume the build process at the --from_step. You can use the --list_steps option to determine the list of valid steps.

IMPORTANT: The --from_step and --to_step options are sequential build options that require you to use the same project directory when launching the Vitis compiler using --from_step to resume the build as you specified when using --to_step to start the build.

For example:

v++ --link --from_step vpl.update_bd

-g

Applies to
Compile and link
-g

Generates code for debugging the kernel during software emulation. Using this option adds features to facilitate debugging the kernel as it is compiled.

For example:

v++ -g ...

-h | --help

-h

Prints the help contents for the v++ command. For example:

v++ -h

--hls

Applies to
Compile

Specify options for the Vitis HLS synthesis process during kernel compilation. See --hls Options for more information.

-I | --include

Applies to
Compile and link
--include <arg>

Add the specified directory to the list of directories to be searched for header files. This option is passed to the Vitis compiler pre-processor.
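
For example, assuming the kernel headers reside in a local ./includes directory:

v++ -c -I ./includes ...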

<input_file>

Applies to
Compile and link
<input_file1> <input_file2> ...

Specifies an OpenCL or C/C++ kernel source file for v++ compilation, or Xilinx object (XO) files for v++ linking.

For example:

v++ -l kernel1.xo kernelRTL.xo ...

--interactive

Applies to
Compile and link
--interactive [ impl ]

v++ configures the needed environment and launches the Vivado tool with the implementation project.

Because you are interactively launching the Vivado tool, the linking process is stopped after the vpl step, which is the equivalent of using the --to_step vpl option in your v++ command.

When you are done interactively working with the Vivado tool, and you save the design checkpoint (DCP), you can resume the Vitis compiler linking process using the v++ --from_step rtdgen, or use the --reuse_impl or --reuse_bit options to read in the implemented DCP file or bitstream.

For example:

v++ --interactive impl
## Interactively use the Vivado tool
v++ --from_step rtdgen

-k | --kernel

Applies to
Compile
--kernel <arg>

Compile only the specified kernel from the input file. Only one -k option is allowed per v++ command. Valid values include the name of the kernel to be compiled from the input .cl or .c/.cpp kernel source code.

This is required for C/C++ kernels, but is optional for OpenCL kernels. OpenCL uses the kernel keyword to identify a kernel. For C/C++ kernels, you must identify the kernel by -k or --kernel.

When an OpenCL source file is compiled without the -k option, all the kernels in the file are compiled. Use -k to target a specific kernel.

For example:

v++ -c --kernel vadd

--kernel_frequency

IMPORTANT: This command is used for legacy platforms with changeable clocks, and is replaced by the --clock Options command for newer platform shells. Refer to Managing Clock Frequencies for more information.
Applies to
Compile and link
--kernel_frequency <freq> | <clockID>:<freq>[|<clockID>:<freq>]

Specifies a user-defined clock frequency (in MHz) for the kernel, overriding the default clock frequency defined on the hardware platform. The <freq> argument specifies a single frequency for kernels with only a single clock; for kernels that support two clocks, you can specify a <clockID> and <freq> pair for each clock.

The syntax for overriding the clock on a platform with only one kernel clock, is to simply specify the frequency in MHz:

v++ --kernel_frequency 300

To override a specific clock on a platform with two clocks, specify the clock ID and frequency:

v++ --kernel_frequency 0:300

To override both clocks on a multi-clock platform, specify each clock ID and the corresponding frequency. For example:

v++ --kernel_frequency 0:300|1:500

-l | --link

--link

This is a required option for the linking process, which follows compilation, but is mutually exclusive with --compile or --package. Run v++ in link mode to link XO input files and generate an xclbin output file.
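
For example, linking two previously compiled XO files into a device binary (the platform and file names are illustrative):

v++ -l -t hw --platform xilinx_u200_xdma_201830_2 -o krnl_vadd.xclbin kernel1.xo kernelRTL.xo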

--linkhook

Applies to
Link

Lets you customize the build process for the device binary by specifying Tcl scripts to be run at specific steps in the implementation flow. See --linkhook Options for more information.

--list_steps

Applies to
Compile and link
--list_steps

List valid run steps for a given target. This option returns a list of steps that can be used in the --from_step or --to_step options. The command must be specified with the following options:

  • -t | --target [sw_emu | hw_emu | hw ]: Specifies the build target, as described in Build Targets.
  • [ --compile | --link ]: Specifies the list of steps from either the compile or link process for the specified build target.

For example:

v++ -t hw_emu --link --list_steps

--log_dir

Applies to
Compile and link
--log_dir <dir_name>

Specifies a directory to store log files into. If --log_dir is not specified, the tool saves the log files to ./_x/logs. Refer to Output Directories of the v++ Command for more information.

For example:

v++ --log_dir /tmp/myProj_logs ...

--message_rules

Applies to
Compile and link
--message_rules <file_name>

Specifies a message rule file with rules for controlling messages. Refer to Using the Message Rule File for more information.

For example:

v++ --message_rules ./minimum_out.mrf ...

--no_ip_cache

Applies to
Compile and link
--no_ip_cache

Disables the IP cache for out-of-context (OOC) synthesis for Vivado Synthesis. Disabling the IP cache repository requires the tool to regenerate the IP synthesis results for every build, and can increase the build time. However, it also results in a clean build, eliminating earlier results for IP in the design.

For example:

v++ --no_ip_cache ...

-O | --optimize

Applies to
Compile and link
--optimize <arg>

This option specifies the optimization level of the Vivado implementation results. Valid optimization values include the following:

  • 0: Default optimization. Reduces compilation time.
  • 1: Optimizes to reduce power consumption by running Vivado implementation strategy Power_DefaultOpt. This takes more time to build the design.
  • 2: Optimizes to increase kernel speed. This option increases build time, but also improves the performance of the generated kernel by adding the PHYS_OPT_DESIGN step to implementation.
  • 3: This optimization provides the highest level performance in the generated code, but compilation time can increase considerably. This option specifies retiming during synthesis, and enables both PHYS_OPT_DESIGN and POST_ROUTE_PHYS_OPT_DESIGN during implementation.
  • s: Optimizes the design for size. This reduces the logic resources of the device used by the kernel by running the Area_Explore implementation strategy.
  • quick: Reduces Vivado implementation time, but can reduce kernel performance, and increases the resources used by the kernel. This enables the Flow_RuntimeOptimized strategy for both synthesis and implementation.

For example:

v++ --link --optimize 2

-o | --output

Applies to
Compile and link
-o <output_name>

Specifies the name of the output file generated by the v++ command. The compilation (-c) process output name must end with the XO file suffix, for Xilinx object file. The linking (-l) process output file must end with the xclbin file suffix, for Xilinx executable binary.

For example:

v++ -o krnl_vadd.xo

If -o or --output is not specified, the output file names default to the following:

  • a.xo for compilation.
  • a.xclbin for linking.

-p | --package

Applies to
Package

Specify options for the Vitis compiler to package your design for either running emulation or running on hardware. See --package Options for more information.

-f | --platform

Applies to
Compile and link
--platform <platform_name>

Specifies the name of a supported acceleration platform as specified by the $PLATFORM_REPO_PATHS environment variable, or the full path to the platform .xpfm file. For a list of supported platforms for the release, see the Vitis Software Platform Release Notes.

This is a required option for both compilation and linking, to define the target Xilinx platform of the build process. The --platform option accepts either a platform name, or the full or relative path to a platform .xpfm file.

IMPORTANT: The specified platform and build targets for compiling and linking must match. The --platform and -t options specified when the XO file is generated by compilation, must be the --platform and -t used during linking. For more information, see platforminfo Utility.

For example:

v++ --platform xilinx_u200_xdma_201830_2 ...
TIP: All Vitis compiler options can be specified in a configuration file for use with the --config option. For example, the platform option can be specified in a configuration file without a section head using the following syntax:
platform=xilinx_u200_xdma_201830_2

--profile

Applies to
Compile and link

Specify options to configure the Xilinx runtime environment to capture application performance information. See --profile Options for more information.

--remote_ip_cache

Applies to
Compile and link
--remote_ip_cache <dir_name>

Specifies the location of the remote IP cache directory for Vivado Synthesis to use during out-of-context (OOC) synthesis of IP. OOC synthesis lets the Vivado synthesis tool reuse synthesis results for IP that have not been changed in iterations of a design. This can reduce the time required to build your .xclbin files, due to reusing synthesis results.

When the --remote_ip_cache option is not specified, the IP cache is written to the current working directory from which v++ was launched. You can use this option to provide a different cache location, for instance, one shared across multiple projects.

For example:

v++ --remote_ip_cache /tmp/IP_cache_dir ...

--report_dir

Applies to
Compile and link
--report_dir <dir_name>

Specifies a directory to store report files into. If --report_dir is not specified, the tool saves the report files to ./_x/reports. Refer to Output Directories of the v++ Command for more information.

For example:

v++ --report_dir /tmp/myProj_reports ...

-R | --report_level

Applies to
Compile and link
--report_level <arg>

Valid report levels: 0, 1, 2, estimate.

These report levels have mappings kept in the optMap.xml file. You can override the installed optMap.xml to define custom report levels.

  • The -R0 specification turns off all intermediate design checkpoint (DCP) generation during Vivado implementation, and turns on post-route timing report generation.
  • The -R1 specification includes everything from -R0, plus report_failfast pre-opt_design, report_failfast post-opt_design, and enables all intermediate DCP generation.
  • The -R2 specification includes everything from -R1, plus report_failfast post-route_design.
  • The -Restimate specification forces Vitis HLS to generate a design.xml file if it does not exist and then generates a System Estimate report, as described in System Estimate Report.
    TIP: This option is useful for the software emulation build (-t sw_emu), when design.xml is not generated by default.

For example:

v++ -R2 ... 

--reuse_bit

Applies to
Link
--reuse_bit <arg>

Specifies the path and file name of a generated bitstream file (.bit) to use when generating the device binary (xclbin) file. As described in Using -to_step and Launching Vivado Interactively, you can specify the --to_step option to interrupt the Vitis build process and manually place and route a synthesized design to generate the bitstream.

IMPORTANT: The --reuse_bit option is a sequential build option that requires you to use the same project directory when resuming the Vitis compiler with --reuse_bit that you specified when using --to_step to start the build.

For example:

v++ --link --reuse_bit ./project.bit

--reuse_impl

Applies to
Link
--reuse_impl <arg>

Specifies the path and file name of an implemented design checkpoint (DCP) file to use when generating the device binary (xclbin) file. The link process uses the specified implemented DCP to extract the FPGA bitstream and generate the xclbin. For instance, you can manually edit the Vivado project created by a previously completed Vitis build, or specify the --to_step option to interrupt the Vitis build process and manually place and route a synthesized design. This allows you to work interactively with the Vivado Design Suite to change the design and use the resulting DCP in the build process.

IMPORTANT: The --reuse_impl option is an incremental build option that requires you to use the same project directory when resuming the Vitis compiler with --reuse_impl that you specified when using --to_step to start the build.

For example:

v++ --link --reuse_impl ./manual_design.dcp

-s | --save-temps

Applies to
Compile and link
--save-temps

Directs the v++ command to save intermediate files/directories created during the compilation and link process. Use the --temp_dir option to specify a location to write the intermediate files to.

TIP: This option is useful for debugging when you encounter issues in the build process.

For example:

v++ --save-temps ...

-t | --target

Applies to
Compile and link
-t [ sw_emu | hw_emu | hw ]

Specifies the build target, as described in Build Targets. The build target determines the results of the compilation and linking processes. You can choose to build an emulation model for debug and test, or build the actual system to run in hardware. The build target defaults to hw if -t is not specified.

IMPORTANT: The specified platform and build targets for compiling and linking must match. The --platform and -t options specified when the XO file is generated by compilation must be the --platform and -t used during linking.

The valid values are:

  • sw_emu: Software emulation
  • hw_emu: Hardware emulation
  • hw: Hardware

For example:

v++ --link -t hw_emu

--temp_dir

Applies to
Compile and link
--temp_dir <dir_name>

This allows you to manage the location where the tool writes temporary files created during the build process. The temporary results are written by the v++ compiler, and then removed, unless the --save-temps option is also specified.

If --temp_dir is not specified, the tool saves the temporary files to ./_x/temp. Refer to Output Directories of the v++ Command for more information.

For example:

v++ --temp_dir /tmp/myProj_temp ...

--to_step

Applies to
Compile and link
--to_step <arg>

Specifies a step name, for either the compile or link process, to run the build process through that step. You can use the --list_steps option to determine the list of valid compile or link steps.

The build process terminates after completing the named step. At this point, you can interact with the build results, for example, by manually accessing the HLS project or the Vivado Design Suite project to perform specific tasks. To return to the build flow, launch the v++ command with the --from_step option.

IMPORTANT: The --to_step and --from_step options are incremental build options that require you to use the same project directory when launching the Vitis compiler using --from_step to resume the build as you specified when using --to_step to start the build.

You must also specify --save-temps when using --to_step to preserve the temporary files required by the Vivado tools. For example:

v++ --link --save-temps --to_step vpl.update_bd

--trace_memory

Applies to
Link
--trace_memory <arg>

The --trace_memory option applies to the hardware build target (-t hw) only, and should not be used for software or hardware emulation flows. Use it with the --profile.xxx options, as described in --profile Options, when linking for the hardware target to specify the type and amount of memory to use for capturing trace data.

<FIFO>:<size>|<MEMORY>[<n>] specifies trace buffer memory type for profiling.

  • FIFO:<size>: Specified in KB. Default is FIFO:8K. The maximum is 4G.
  • <MEMORY>[<n>]: Specifies the type and index of a memory resource on the platform. Memory resources for the target platform can be identified with the platforminfo command. Supported memory types include HBM, DDR, PLRAM, HP, ACP, MIG, and MC_NOC. For example, DDR[1].
IMPORTANT: When using --trace_memory during the linking step, you should also use the [Debug] trace_buffer_size in the xrt.ini file as described in xrt.ini File.
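
For example, a sketch of directing trace data to the DDR[1] memory resource for a hardware build (an accompanying --profile option, as described in --profile Options, is also expected on the link command line):

v++ -t hw -l --trace_memory DDR[1] ...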

-v | --version

-v

Prints the version and build information for the v++ command. For example:

v++ -v

--vivado

Applies to
Link

Specify properties and parameters to configure the Vivado synthesis and implementation environment prior to building the device binary. See --vivado Options for more information.

--user_board_repo_paths

Applies to
Compile and link
--user_board_repo_paths

Specifies an existing user board repository for DIMM board files. This value is prepended to the board_part_repo_paths property of the Vivado project.
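
For example, assuming a local board repository directory named ./my_board_repo:

v++ --user_board_repo_paths ./my_board_repo ...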

--user_ip_repo_paths

Applies to
Compile and link
--user_ip_repo_paths <repo_dir>

Specifies the directory location of one or more user IP repository paths to be searched first for IP used in the kernel design. This value is prepended to the ip_repo_paths property used by the Vivado tool to locate IP cores. IP definitions from these specified paths are used ahead of IP repositories from the hardware platform (.xsa) or from the Xilinx IP catalog.

TIP: Multiple --user_ip_repo_paths can be specified on the v++ command line.

The following lists show the priority order in which IP definitions are found during the build process, from highest to lowest. Note that any of these entries can include multiple directories.

  • For the system hardware build (-t hw):
    1. IP definitions from --user_ip_repo_paths.
    2. Kernel IP definitions (vpl --iprepo switch value).
    3. IP definitions from the IP repository associated with the platform.
    4. IP cache from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/cache/).
    5. Xilinx IP catalog from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/ip/)
  • For the hardware emulation build (-t hw_emu):
    1. IP definitions and User emulation IP repository from --user_ip_repo_paths.
    2. Kernel IP definitions (vpl --iprepo switch value).
    3. IP definitions from the IP repository associated with the platform.
    4. IP cache from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/cache/).
    5. $::env(XILINX_VITIS)/data/emulation/hw_em/ip_repo
    6. $::env(XILINX_VIVADO)/data/emulation/hw_em/ip_repo
    7. Xilinx IP catalog from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/ip/)

For example:

v++ --user_ip_repo_paths ./myIP_repo ...

--advanced Options

The --advanced.param and --advanced.prop options specify parameters and properties for use by the v++ command. When compiling or linking, these options offer fine-grain control over the hardware generated by the Vitis core development kit, and the hardware emulation process.

The arguments for the --advanced.xxx options are specified as <param_name>=<param_value>. For example:

v++ --link --advanced.param compiler.enableXSAIntegrityCheck=true \
--advanced.prop kernel.foo.kernel_flags="-std=c++0x"
TIP: All Vitis compiler options can be specified in a configuration file for use with the --config option, as discussed in Vitis Compiler Configuration File. For example, the --platform option can be specified in a configuration file without a section head using the following syntax:
platform=xilinx_u200_xdma_201830_2

--advanced.param

--advanced.param <param_name>=<param_value>

Specifies advanced parameters as described in the table below.

Table 1. Param Options
Parameter Name Valid Values Description
compiler.acceleratorBinaryContent Type: String

Default Value: <empty>

Content to insert in the xclbin. Valid options include bitstream, pdi, dcp, "dcp, bitstream", and "dcp, pdi".

Used while building the hardware target, this option applies to:

  • v++ --link
  • vpl.impl
  • xclbinutil

bitstream and pdi are mutually exclusive: pdi applies to Versal platforms, and bitstream applies to non-Versal platforms.

TIP: When this parameter is set to "dcp, bitstream" or "dcp, pdi", v++ generates two xclbin files: one containing a DCP file, and the other containing a bitstream or PDI file.
compiler.addOutputTypes Type: String

Default Value: <empty>

Additional output types produced by the Vitis compiler. Valid values include: xclbin and hw_export. Use hw_export to create a fixed XSA from dynamic hardware platforms for use in the Embedded Software Development Flow.

Applies to:

  • v++ --link
  • vpl.impl
  • XSA generation
compiler.axiDeadLockFree Type: Boolean

Default Value: TRUE

Avoids deadlocks. This option is enabled by default for Vitis HLS.
compiler.deadlockDetection Type: Boolean

Default Value: FALSE

Enables detection of kernel deadlocks during the simulation run as part of hardware emulation. The tool posts an Error message to the console and the log file when the application is deadlocked:
// ERROR!!! DEADLOCK DETECTED at 42979000 ns! SIMULATION WILL BE STOPPED! //

The message is repeated until the deadlock is terminated. You must manually terminate the application to end the deadlock condition.

TIP: When deadlocks are encountered during simulation, you can open the kernel code in Vitis HLS as described in Compiling Kernels with Vitis HLS for additional deadlock detection and debug capability.

Applies to:

  • v++ --compile
  • Vitis HLS
  • config_export
compiler.enableIncrHwEmu Type: Boolean

Default Value: FALSE

Use to enable incremental compilation of the hardware emulation xclbin when there are minor changes made to the platform. This enables a quick rebuild of the device binary for hardware emulation when the platform has been updated.

Applies to:

  • v++ --link
  • vpl.impl
compiler.errorOnHoldViolation Type: Boolean

Default Value: TRUE

After the last step of Vivado implementation, the tool performs a timing analysis check and, if needed, clock scaling. If hold violations are found, v++ quits and returns an error by default, and does not generate an xclbin. This parameter lets you override the default behavior.

Applies to:

  • v++ --link
  • vpl.impl
compiler.fsanitize Type: String

Default Value: <empty>

Enables additional memory access checks for OpenCL kernels as described in Debugging OpenCL Kernels. Valid values include: address, memory.

Applies to Software Emulation and Debug.

compiler.interfaceRdBurstLen Type: Int Range

Default Value: 0

Specifies the expected length of AXI read bursts on the kernel AXI interface. This is used with option compiler.interfaceRdOutstanding to determine the hardware buffer sizes. Values are 1 through 256.

Applies to:

  • v++ --compile
  • Vitis HLS
  • config_interface
compiler.interfaceWrBurstLen Type: Int Range

Default Value: 0

Specifies the expected length of AXI write bursts on the kernel AXI interface. This is used with option compiler.interfaceWrOutstanding to determine the hardware buffer sizes. Values are 1 through 256.

Applies to:

  • v++ --compile
  • Vitis HLS
  • config_interface
compiler.interfaceRdOutstanding Type: Int Range

Default Value: 0

Specifies how many outstanding reads to buffer on the kernel AXI interface. Values are 1 through 256.

Applies to:

  • v++ --compile
  • Vitis HLS
  • config_interface
compiler.interfaceWrOutstanding Type: Int Range

Default Value: 0

Specifies how many outstanding writes to buffer on the kernel AXI interface. Values are 1 through 256.

Applies to:

  • v++ --compile
  • Vitis HLS
  • config_interface
compiler.maxComputeUnits Type: Int

Default Value: -1

Maximum compute units allowed in the system. The default is 60 compute units, or is specified in the hardware platform (.xsa) with the numComputeUnits property.

The specified value overrides the default value or the value defined by the hardware platform. The default value of -1 preserves the existing default.

Applies to v++ --link.

compiler.skipTimingCheckAndFrequencyScaling Type: Boolean

Default Value: FALSE

This parameter causes the Vivado tool to skip the timing check and optional clock frequency scaling that occurs after the last step of the implementation process, which is either route_design or post-route phys_opt_design.

Applies to:

  • v++ --link
  • vpl.impl
compiler.userPreCreateProjectTcl Type: String

Default Value: <empty>

Specifies a Tcl script to run before creating the Vivado project in the Vitis build process.

Applies to:

  • v++ --link
  • vpl.create_project
compiler.userPreSysLinkOverlayTcl Type: String

Default Value: <empty>

Specifies a Tcl script to run after opening the Vivado IP integrator block design, before running the compiler-generated dr.bd.tcl script in the Vitis build process.

Applies to:

  • v++ --link
  • vpl.create_bd
compiler.userPostSysLinkOverlayTcl Type: String

Default Value: <empty>

Specifies a Tcl script to run after running the compiler-generated dr.bd.tcl script.

Applies to:

  • v++ --link
  • vpl.update_bd
compiler.userPostDebugProfileOverlayTcl Type: String

Default Value: <empty>

Specifies a Tcl script to run after debug profile overlay insertion in Vivado IP integrator block design in the vpl.update_bd step.

Applies to:

  • v++ --link
  • vpl.update_bd
compiler.worstNegativeSlack Type: Float

Default Value: 0

During timing analysis check, this specifies the worst acceptable negative slack for the design, specified in nanoseconds (ns). When negative slack exceeds the specified value, the tool might try to scale the clock frequency to achieve timing results. This specifies an acceptable negative slack value instead of zero slack.

Applies to:

  • v++ --link
  • vpl.impl
compiler.xclDataflowFifoDepth Type: Int

Default Value: -1

Specifies the depth of FIFOs used in kernel data flow region.

Applies to:

  • v++ --compile
  • Vitis HLS
  • config_dataflow
hw_emu.aie_shim_sol_path Type: String

Default Value: <empty>

For use by Versal platforms, this option specifies the path to the AI Engine SHIM Solution constraints file which is generated by the aiecompiler.

Used during simulation, compilation, and elaboration, the file provides a logical mapping to the physical interface. This is needed for third-party simulators like Mentor Graphics Questa Advanced Simulator or Cadence Xcelium Logic Simulation.

hw_emu.compiledLibs Type: String

Default Value: <empty>

Specifies the compiled libraries (clibs) to use with the specified simulator.

Applies to Hardware Emulation and Debug.

hw_emu.debugMode wdb

Default Value: wdb

The default value is WDB and runs simulation in waveform mode.

This option only works in combination with the -g or --debug options.

Applies to Hardware Emulation and Debug.

hw_emu.enableProtocolChecker Type: Boolean

Default Value: FALSE

Enables the lightweight AXI protocol checker (lapc) during HW emulation. This is used to confirm the accuracy of any AXI interfaces in the design.

Applies to Hardware Emulation and Debug.

hw_emu.json_device_file_path Type: String

Default Value: <empty>

For use by Versal platforms, this option specifies the path to the AI Engine JSON Device file located in the Vitis software installation area.

Used during simulation, compilation, and elaboration, the file specifies the size of the AI Engine array. This is needed for third-party simulators like Mentor Graphics Questa Advanced Simulator or Cadence Xcelium Logic Simulation.

hw_emu.platformPath Type: String

Default Value: <empty>

Specifies the path to the custom platform directory. The <platformPath> directory should meet the following requirements to be used in platform creation:
  • The directory should contain a subdirectory called ip_repo.
  • The directory should contain a subdirectory called scripts and this scripts directory should contain a hw_em_util.tcl file. The hw_em_util.tcl file should have the following two procedures defined in it:
    • hw_em_util::add_base_platform
    • hw_em_util::generate_simulation_scripts_and_compile

Applies to Hardware Emulation and Debug.

hw_emu.reduceHwEmuCompileTime Type: Boolean

Default Value: FALSE

Move the generation of the top-level block design into the Generate Targets step of v++ --link.

Applies to Hardware Emulation and Debug.

hw_emu.post_sim_settings Type: String

Specifies the path to a Tcl script that is used to configure the settings of the Vivado simulator prior to running hardware emulation. This script is run after the default configuration of the tool, but prior to launching simulation. You can use the Tcl script to override specific settings, or to custom configure the simulator as needed.

Applies to Hardware Emulation and Debug.

hw_emu.scDebugLevel none | waveform | log | waveform_and_log

Default Value: waveform_and_log

Sets the TLM transaction debug level of the Vivado logic simulator (xsim).
  • NONE to disable TLM debug
  • LOG to dump TLM transaction log info into report file
  • WAVEFORM for enabling the TLM transaction waveform view
  • WAVEFORM_AND_LOG for both the Log Messages and Waveform view

Applies to Hardware Emulation and Debug.

hw_emu.simulator XSIM | QUESTA

Default Value: XSIM

Uses the specified simulator for the hardware emulation run.

Applies to Hardware Emulation and Debug.

For example:
--advanced.param compiler.addOutputTypes="hw_export"
TIP: This option can be specified in a configuration file under the [advanced] section head using the following format:
[advanced]
param=compiler.addOutputTypes="hw_export"

--advanced.prop

--advanced.prop <arg>

Specifies advanced kernel or solution properties for kernel compilation where <arg> is one of the values described in the table below.

Table 2. Prop Options
Property Name Valid Values Description
kernel.<kernel_name>.kernel_flags Type: String

Default Value: <empty>

Sets specific compile flags on the kernel <kernel_name>.
solution.device_repo_path Type: String

Default Value: <empty>

Specifies the path to a repository of hardware platforms. The --platform option with full path to the .xpfm platform file should be used instead.
solution.kernel_compiler_margin Type: Float

Default Value: 12.5% of the kernel clock period.

The clock margin (in ns) for the kernel. This value is subtracted from the kernel clock period prior to synthesis to provide some margin for place and route delays.
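
For example, a sketch setting kernel compile flags for a hypothetical kernel named vadd:

v++ -c --advanced.prop kernel.vadd.kernel_flags="-std=c++0x" ...
By analogy with the --advanced.param example above, this option can presumably also be specified in a configuration file under the [advanced] section head using the following format:
[advanced]
prop=kernel.vadd.kernel_flags="-std=c++0x"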

--advanced.misc

--advanced.misc <arg>

Specifies advanced tool directives for kernel compilation.

--clock Options

IMPORTANT: The --clock options described here are supported on embedded processor platforms and newer platforms for Data Center accelerator cards, as described in Managing Clock Frequencies.

You can specify the --clock option using either a clock ID from the platform shell, or by specifying a frequency for the kernel clock. When specifying the clock ID, the kernel frequency is defined by the frequency of that clock ID on the platform. When specifying the kernel frequency, the platform attempts to create the specified frequency by scaling one of the available fixed platform clocks. In some cases, the clock frequency can only be approximated, and you can specify --clock.tolerance or --clock.defaultTolerance to indicate an acceptable range. If the available fixed clock cannot be scaled within the acceptable tolerance, a warning is issued and the kernel is connected to the default clock.

The --clock.XXX options provide a method for assigning clocks to kernels from the v++ command line and locating the required kernel clock frequency source during the linking process. There are a number of options that can be used with increasing specificity. The order of precedence is determined by how specific a clock option is. The rules are listed in order from general to specific, where more specific rules take precedence over general rules:

  • When no --clock.XXX option is specified, the platform default clock is applied to each compute unit (CU). For kernels with two clocks, clock ID 0 from the platform is assigned to ap_clk, and clock ID 1 is assigned to ap_clk_2.
  • Specifying --clock.defaultId=<id> defines a specific clock ID for all kernels, overriding the platform default clock assignments.
  • Specifying --clock.defaultFreqHz=<Hz> defines a specific clock frequency for all kernels that overrides a user specified default clock ID, and the platform default clock.
  • Specifying --clock.id=<id>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]] assigns a clock ID to a list of associated CUs, and optionally the clock pin for the CU.
  • Specifying --clock.freqHz=<Hz>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]] assigns the specified clock frequency to a list of associated CUs, and optionally the clock pin for the CU.

--clock.defaultFreqHz

--clock.defaultFreqHz <arg>

Specifies a default clock frequency in Hz to use for all kernels. This lets you override the default platform clock and assign the specified clock frequency as the default. <arg> is specified as the clock frequency in Hz.

For example:

v++ --link --clock.defaultFreqHz 300000000
TIP: This option can be specified in a configuration file under the [clock] section head using the following format:
[clock]
defaultFreqHz=300000000

--clock.defaultId

--clock.defaultId <arg>

Specifying --clock.defaultId=<id> defines a specific clock ID for all kernels, overriding the platform default clock. <arg> is specified as the clock ID from one of the clocks defined on the target platform, other than the default clock ID.
TIP: You can determine the available clock IDs and clock status for a target platform using the platforminfo -v command as described in platforminfo Utility.

For example:

v++ --link --clock.defaultId 1
TIP: This option can be specified in a configuration file under the [clock] section head using the following format:
[clock]
defaultId=1

--clock.defaultTolerance

--clock.defaultTolerance <arg>

Specifies a default clock tolerance as a value, or as a percentage of the default clock frequency. When specifying clock.defaultFreqHz, you can also specify the tolerance with either a value or percentage. This updates the timing constraints to reflect the accepted tolerance.

The tolerance value, <arg>, can be specified as a whole number, indicating the clock.defaultFreqHz ± the specified tolerance; or as a percentage of the default clock frequency specified as a decimal value.

IMPORTANT: The default clock tolerance is 5% when this option is not specified.

For example:

v++ --link --clock.defaultFreqHz 300000000 --clock.defaultTolerance 0.10
TIP: This option can be specified in a configuration file under the [clock] section head using the following format:
[clock]
defaultTolerance=0.10

--clock.freqHz

--clock.freqHz <arg>

Specifies a clock frequency in Hz and assigns it to a list of associated compute units (CUs) and optionally specific clock pins on the CU. <arg> is specified as <frequency_in_Hz>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]:

  • <frequency_in_Hz>: Defines the clock frequency specified in Hz.
  • <cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]: Applies the defined frequency to the specified CUs, and optionally to the specified clock pin on the CU.
For example:
v++ --link --clock.freqHz 300000000:vadd_1,vadd_3
TIP: This option can be specified in a configuration file under the [clock] section head using the following format:
[clock]
freqHz=300000000:vadd_1,vadd_3

--clock.id

--clock.id <arg>

Specifies an available clock ID from the target platform and assigns it to a list of associated compute units (CUs) and optionally specific clock pins on the CU. <arg> is specified as <reference_ID>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]:

  • <reference_ID>: Defines the clock ID to use from the target platform.
    TIP: You can determine the available clock IDs for a target platform using the platforminfo utility as described in platforminfo Utility.
  • <cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]: Applies the defined frequency to the specified CUs and optionally to the specified clock pin on the CU.

For example:

v++ --link --clock.id 1:vadd_1,vadd_3
TIP: This option can be specified in a configuration file under the [clock] section head using the following format:
[clock]
id=1:vadd_1,vadd_3

--clock.tolerance

--clock.tolerance <arg>

Specifies a clock tolerance as a value, or as a percentage of the clock frequency. When specifying --clock.freqHz, you can also specify the tolerance with either a value or percentage. This updates the timing constraints to reflect the accepted tolerance. <arg> is specified as <tolerance>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]

  • <tolerance>: Can be specified either as a whole number, indicating the clock.freqHz ± the specified tolerance value; or as a percentage of the clock frequency specified as a decimal value.
  • <cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]: Applies the defined clock tolerance to the specified CUs, and optionally to the specified clock pin on the CU.
IMPORTANT: The default clock tolerance is 5% of the --clock.freqHz value when this option is not specified.

For example:

v++ --link --clock.tolerance 0.10:vadd_1,vadd_3 
TIP: This option can be specified in a configuration file under the [clock] section head using the following format:
[clock]
tolerance=0.10:vadd_1,vadd_3

--connectivity Options

As discussed in Linking the Kernels, there are a number of --connectivity.XXX options that let you define the topology of the FPGA binary, specifying the number of CUs, assigning them to SLRs, connecting kernel ports to global memory, and establishing streaming port connections. These commands are an integral part of the build process, critical to the definition and construction of the application.

--connectivity.nk

--connectivity.nk <arg>

Where <arg> is specified as <kernel_name>:#:<cu_name1>.<cu_name2>...<cu_name#>.

This instantiates the specified number of CUs (#) for the specified kernel (<kernel_name>) in the generated FPGA binary (.xclbin) file during the linking process. The <cu_name> is optional. If the <cu_name> is not specified, the instances of the kernel are simply numbered: <kernel_name>_1, <kernel_name>_2, and so forth. By default, the Vitis compiler instantiates one compute unit for each kernel.

For example:

v++ --link --connectivity.nk vadd:3:vadd_A.vadd_B.vadd_C
TIP: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
nk=vadd:3:vadd_A.vadd_B.vadd_C

--connectivity.sc

--connectivity.sc <arg>

Create a streaming connection between two compute units through their AXI4-Stream interfaces. Use a separate --connectivity.sc option for each streaming interface connection. The order of connection must be from a streaming output port of the first kernel to a streaming input port of the second kernel. Valid values include:

<cu_name>.<streaming_output_port>:<cu_name>.<streaming_input_port>[:<fifo_depth>]

Where:

  • <cu_name> is the compute unit name specified in the --connectivity.nk option. Generally this is <kernel_name>_1 unless a different name was specified.
  • <streaming_output_port>/<streaming_input_port> is the function argument for the compute unit port that is declared as an AXI4-Stream.
  • [:<fifo_depth>] inserts a FIFO of the specified depth between the two streaming ports to prevent stalls. The value is specified as an integer.

For example, to connect the AXI4-Stream port s_out of the compute unit mem_read_1 to AXI4-Stream port s_in of the compute unit increment_1, use the following:

--connectivity.sc mem_read_1.s_out:increment_1.s_in
TIP: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
sc=mem_read_1.s_out:increment_1.s_in

The inclusion of the optional <fifo_depth> value lets the v++ linker add a FIFO between the two kernels to help prevent stalls. This uses BRAM resources from the device when specified, but eliminates the need to update the HLS kernel to contain FIFOs. The tool also instantiates a Clock Converter (CDC) or Datawidth Converter (DWC) IP if the connections have different clocks, or different bus widths.

--connectivity.slr

--connectivity.slr <arg>

Use this option to assign a CU to a specific SLR on the device. The option must be repeated for each kernel or CU being assigned to an SLR.

IMPORTANT: If you use --connectivity.slr to assign the kernel placement, then you must also use --connectivity.sp to assign memory access for the kernel.

Valid values include:

<cu_name>:<SLR_NUM>

Where:

  • <cu_name> is the name of the compute unit as specified in the --connectivity.nk option. Generally this is <kernel_name>_1 unless a different name was specified.
  • <SLR_NUM> is the SLR number to assign the CU to. For example, SLR0, SLR1.

For example, to assign CU vadd_2 to SLR2, and CU fft_1 to SLR1, use the following:

v++ --link --connectivity.slr vadd_2:SLR2 --connectivity.slr fft_1:SLR1
TIP: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
slr=vadd_2:SLR2
slr=fft_1:SLR1

--connectivity.sp

--connectivity.sp <arg>

Use this option to specify the assignment of kernel arguments to system ports within the platform. A primary use case for this option is to connect kernel arguments to specific memory resources. A separate --connectivity.sp option is required to map each argument of a kernel to a particular memory resource. Any argument not explicitly mapped to a memory resource through the --connectivity.sp option is automatically connected to an available memory resource during the build process.

Note: Xilinx recommends specifying argument names when using the --connectivity.sp option as this provides the greatest connection flexibility. However, you can also specify kernel interface ports with this option.

Valid values include:

<cu_name>.<kernel_argument_name>:<sptag>[min:max]

Where:

  • <cu_name> is the name of the compute unit as specified in the --connectivity.nk option. Generally this is <kernel_name>_1 unless a different name was specified.
  • <kernel_argument_name> is the name of the function argument for the kernel, or the compute unit interface port.
  • <sptag> represents a system port tag, such as for memory controller interface names from the target platform. Valid <sptag> names include DDR, PLRAM, and HBM.
  • [min:max] enables the use of a range of memory, such as DDR[0:2]. A single index is also supported: DDR[2].
TIP: The supported <sptag> and range of memory resources for a target platform can be obtained using the platforminfo command. Refer to platforminfo Utility for more information.

The following example maps the input argument (A) for the specified CU of the VADD kernel to DDR[0:3], input argument (B) to HBM[0:31], and writes the output argument (C) to PLRAM[2]:

v++ --link --connectivity.sp vadd_1.A:DDR[0:3] --connectivity.sp vadd_1.B:HBM[0:31] \
--connectivity.sp vadd_1.C:PLRAM[2]
TIP: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
sp=vadd_1.A:DDR[0:3]
sp=vadd_1.B:HBM[0:31]
sp=vadd_1.C:PLRAM[2]

--connectivity.connect

--connectivity.connect <X:Y>

This option can be used to make connections through the Vivado IP integrator, but v++ does not perform any error checking on the specified connections. Use this to specify general connections between kernels and non-AXI elements of the target platform, such as connections to GT ports.

The X and Y connections must be specified as arguments compatible with either the IP integrator connect_bd_net or connect_bd_intf_net commands. The specific format of <X:Y> is:
src/hierarchy_name/cell_name/pin_name:dst/hierarchy_name/cell_name/pin_name

These cannot include connections between AXI4-Stream interfaces, which require the use of --connectivity.sc, or M_AXI interfaces, which require the use of --connectivity.sp, as described above.
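
For example, a purely illustrative connection between a compute unit pin and a platform pin (the hierarchy, cell, and pin names below are placeholders; consult your platform block design for the actual names):

v++ --link --connectivity.connect level0/vadd_1/gt_tx:level0/gt_wiz/tx_in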

TIP: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
connect=<X:Y>

--debug Options

The --debug options enable debug IP core insertion in the device binary (.xclbin) for hardware debugging, letting you specify the type of debug core to add, and which compute unit and interfaces to monitor with ChipScope™. The --debug.xxx options let you attach AXI protocol checkers and System ILA cores at the interfaces to kernels or specific compute units (CUs) for debugging and performance monitoring purposes:

  • The System Integrated Logic Analyzer (ILA) provides transaction level visibility into an accelerated kernel or function running on hardware. AXI traffic of interest can also be captured and viewed using the System ILA core.
  • The AXI Protocol Checker debug core is designed to monitor AXI interfaces on the accelerated kernel. When attached to an interface of a CU, it actively checks for protocol violations and provides an indication of which violation occurred.

The --debug.xxx commands can be specified in a configuration file under the [debug] section head using the following format as an example:

[debug]
protocol=all:all           # Protocol analyzers on all CUs
protocol=cu2:port3         # Protocol analyzer on port3 of cu2
chipscope=cu2              # ILA on cu2

The various options of --debug include the following:

--debug.chipscope

--debug.chipscope <cu_name>[:<interface_name>]

Adds the System Integrated Logic Analyzer debug core to the specified CUs in the design.

IMPORTANT: The --debug.chipscope option requires the <cu_name> to be specified and does not accept the keyword all. You can optionally specify an <interface_name>.

For example, the following command adds an ILA core to the vadd_1 CU:

v++ --link --debug.chipscope vadd_1

--debug.list_ports

Shows a list of valid compute units and port combinations in the current design. This is informational to help you with crafting a command line or config file for the --debug command.

This option needs to be specified during linking, but does not run the linking process. The required elements of the command line are shown in the following example, which returns the available ports when linking the specified kernels with the listed platform:

v++ --platform <platform> --link --debug.list_ports <kernel.xo>

--debug.protocol

--debug.protocol all|<cu_name>[:<interface_name>]

Adds the AXI Protocol Checker debug core to the design. This can be specified with the keyword all, or the <cu_name> and optional <interface_name> to add the protocol checker to the specified CU and interface.

For example:

v++ --link --debug.protocol all

--hls Options

The --hls.XXX options described below are used to specify options for the Vitis HLS synthesis process invoked during kernel compilation.

--hls.clock

--hls.clock <arg>

Specifies a frequency in Hz at which the listed kernel(s) should be compiled by Vitis HLS.

Where <arg> is specified as: <frequency_in_Hz>:<cu_name1>,<cu_name2>,..,<cu_nameN>

  • <frequency_in_Hz>: Defines the kernel frequency specified in Hz.
  • <cu_name1>,<cu_name2>,...: Defines a list of kernels or kernel instances (CUs) to be compiled at the specified target frequency.

For example:

v++ -c --hls.clock 300000000:mmult,mmadd --hls.clock 100000000:fifo_1
TIP: This option can be specified in a configuration file under the [hls] section head using the following format:
[hls]
clock=300000000:mmult,mmadd
clock=100000000:fifo_1

--hls.export_mode

--hls.export_mode <file_type>:<file_path>

Specifies the RTL export mode for Vitis HLS and the path and name of the exported file. As a v++ compiler option, the only supported <file_type> is XO.

For example:

v++ --hls.export_mode xo:./kernel.xo
TIP: This option can be specified in a configuration file under the [hls] section head using the following format:
[hls]
export_mode=xo:./kernel.xo

--hls.export_project

--hls.export_project <arg>

Specifies a directory where the Vitis HLS project setup script is exported.

For example:

v++ --hls.export_project ./hls_export
TIP: This option can be specified in a configuration file under the [hls] section head using the following format:
[hls]
export_project=./hls_export

--hls.jobs

--hls.jobs <arg>

Specifies the number of jobs for launching HLS runs.

This option specifies the number of parallel jobs Vitis HLS uses to synthesize the RTL kernel code. Increasing the number of jobs allows the tool to spawn more parallel processes and complete faster.

For example:

v++ --hls.jobs 4
TIP: This option can be specified in a configuration file under the [hls] section head using the following format:
[hls]
jobs=4

--hls.lsf

--hls.lsf <arg>

Specifies a bsub command to submit a job to LSF for HLS runs.

Specifies the bsub command line as a string to pass to an LSF cluster. This option is required to use the IBM Platform Load Sharing Facility (LSF) for Vitis HLS synthesis.

For example:

v++ --compile --hls.lsf '{bsub -R \"select[type=X86_64]\" -N -q medium}'
TIP: This option can be specified in a configuration file under the [hls] section head using the following format:
[hls]
lsf='{bsub...

--hls.post_tcl

--hls.post_tcl <arg>

Specifies a Tcl file containing Tcl commands for vitis_hls to source after csynth_design.

For example:

v++ --hls.post_tcl ./runPost.tcl
TIP: This option can be specified in a configuration file under the [hls] section head using the following format:
[hls]
post_tcl=./runPost.tcl

--hls.pre_tcl

--hls.pre_tcl <arg>

Specifies a Tcl file containing Tcl commands for vitis_hls to source before running csynth_design.

For example:

v++ --hls.pre_tcl ./runPre.tcl
Where runPre.tcl contains the following commands to configure m_axi interfaces in Vitis HLS:
config_interface -m_axi_auto_max_ports=1
config_interface -m_axi_max_bitwidth 512
TIP: This option can also be specified in a configuration file under the [hls] section head using the following format:
[hls]
pre_tcl=./runPre.tcl

--linkhook Options

The --linkhook.XXX options described below are used to specify Tcl scripts to run at specific steps during the Vitis linking process. Valid steps can be determined using the --linkhook.list_steps command as described below.

--linkhook.custom

--linkhook.custom <step name, path to script file>

Specifies a Tcl script to execute at a predefined point in the build process. The script can be specified with an absolute path, or a path relative to the build directory.

For example, the following command runs the specified Tcl script before the SysLink step in the build:

v++ -l --linkhook.custom preSysLink,./runScript.tcl
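
This option can also be specified in a configuration file under the [linkhook] section head listed in Table 4, for example:

[linkhook]
custom=preSysLink,./runScript.tcl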

--linkhook.do_first

--linkhook.do_first <step name, path to script file>

Specifies a Tcl script to execute before the given step name. The script can be specified with an absolute path, or a path relative to the build directory.

For example, the following command runs the specified Tcl script before the place_design step in the build:

v++ -l --linkhook.do_first vpl.impl.place_design,runScript.tcl

--linkhook.do_last

--linkhook.do_last <step name, path to script file>

Specifies a Tcl script to execute immediately after the given step completes. The script can be specified with an absolute path, or a path relative to the build directory.

For example, the following command runs the specified Tcl script after the place_design step in the build:

v++ -l --linkhook.do_last vpl.impl.place_design,runScript.tcl

--linkhook.list_steps

--linkhook.list_steps

Lists the default and optional build steps that support script hooks for the specified target. This command requires both the --target and --link options to be specified.

For example:

v++ --target hw -l --linkhook.list_steps

The command returns both default steps that are always enabled during the build process, and optional steps that you can enable if needed. Refer to Managing Vivado Synthesis and Implementation Results for directions on enabling optional steps.

--package Options

Introduction

The v++ --package, or -p step, packages the final product at the end of the v++ compile and link build process. This is a required step for all embedded platforms, including Versal devices, AI Engine, and Zynq devices.

The various options of --package include the following:

--package.aie_debug_port

--package.aie_debug_port <arg>

Where <arg> specifies the TCP port on which the emulator listens for incoming connections from the debugger when debugging Versal AI Engine cores.

For example:

v++ -l --package.aie_debug_port 1440 

--package.bl31_elf

--package.bl31_elf <arg>

Where <arg> specifies the absolute or relative path to the Arm Trusted Firmware ELF that executes on the A72 #0 core.

For example:

v++ -l --package.bl31_elf ./arm_trusted.elf 

--package.boot_mode

--package.boot_mode <arg>
Where <arg> specifies the boot mode used for running the application in emulation or on hardware: ospi, qspi, or sd.
TIP: ospi is for use with Versal Data Center platforms only.

For example:

v++ -l --package.boot_mode sd 

--package.defer_aie_run

--package.defer_aie_run

Specifies that the Versal AI Engine cores are enabled by an embedded processor (PS) application. When this option is not specified, the tool instead generates CDO commands to enable the AI Engine cores during PDI load.

For example:

v++ -l --package.defer_aie_run

--package.domain

--package.domain <arg>

Where <arg> specifies a domain name.

For example:

v++ -l --package.domain xrt

--package.dtb

--package.dtb <arg>

Where <arg> specifies the absolute or relative path to device tree binary (DTB) used for loading Linux on the APU.

For example:

v++ -l --package.dtb ./device_tree.image

--package.enable_aie_debug

--package.enable_aie_debug

When enabled, the tool generates CDO commands to stop the AI Engine cores during PDI load.

For example:

v++ -l --package.enable_aie_debug 

--package.image_format

--package.image_format <arg>

Where <arg> specifies the output image file format used on the SD card, either ext4 or fat32.

  • ext4: Linux file system
  • fat32: Windows file system
IMPORTANT: EXT4 format is not supported on Windows.

For example:

v++ -l --package.image_format fat32 

--package.kernel_image

--package.kernel_image <arg>

Where <arg> specifies the absolute or relative path to a Linux kernel image file. Overrides the existing image available in the platform. The platform image file is available for download from xilinx.com. Refer to the Vitis Software Platform Installation for more information.

For example:

v++ -l --package.kernel_image ./kernel_image 

--package.no_image

--package.no_image

Bypass SD card image creation. Valid for --package.boot_mode sd.
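
For example, the following illustrative command packages the design for SD boot mode while skipping creation of the SD card image:

v++ -l --package.boot_mode sd --package.no_image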

--package.out_dir

--package.out_dir <arg>

Where <arg> specifies the absolute or relative path to the output directory of the --package command.

For example:

v++ -l --package.out_dir ./out_dir 

--package.ps_debug_port

--package.ps_debug_port <arg>

Where <arg> specifies the TCP port on which the emulator listens for incoming connections from the debugger when debugging PS cores.

For example:

v++ -l --package.ps_debug_port 3200

--package.ps_elf

--package.ps_elf <arg>

Where <arg> specifies <path_to_elf_file,core>.

  • path_to_elf_file: Specifies the ELF file for the PS core.
  • core: Indicates the PS core it should run on.

Used when a baremetal ELF file is running on a device processor core. This option specifies an ELF file and processor core pair to be included in the boot image. The available processors for supported devices are listed below:

  • Versal processor core values include: a72-0, a72-1, a72-2, and a72-3.
  • Zynq UltraScale+ MPSoC processor core values include: a53-0, a53-1, a53-2, a53-3, r5-0, and r5-1.
  • Zynq-7000 processor core values include: a9-0 and a9-1.
TIP: Specify the option separately for each ELF/Core pair.

For example:

v++ -l --package.ps_elf a53_0.elf,a53-0 --package.ps_elf r5_0.elf,r5-0

--package.rootfs

--package.rootfs <arg>

Where <arg> specifies the absolute or relative path to a processed Linux root file system file. The platform RootFS file is available for download from Xilinx.com. Refer to the Vitis Software Platform Installation for more information.

For example:

v++ -l --package.rootfs ./rootfs.ext4

--package.sd_dir

--package.sd_dir <arg>

Where <arg> specifies a folder to package into the sd_card directory/image. The contents of the directory are copied to a sub-folder of the sd_card folder.

For example:

v++ -l --package.sd_dir ./test_data 

--package.sd_file

--package.sd_file <arg>

Where <arg> specifies an ELF or other data file to package into the sd_card directory/image. This option can be used repeatedly to specify multiple files to add to the sd_card.

For example:

v++ -l --package.sd_file ./arm_trusted.elf 

--package.uboot

--package.uboot <arg>

Where <arg> specifies the path to a U-Boot ELF file that overrides the platform U-Boot.

For example:

v++ -l --package.uboot ./uboot.elf 
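
The --package options can also be collected in a configuration file under the [package] section head listed in Table 4. The following sketch combines several of the options shown above; the file names are placeholders:

[package]
boot_mode=sd
image_format=fat32
kernel_image=./kernel_image
rootfs=./rootfs.ext4
out_dir=./out_dir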

--profile Options

As discussed in Enabling Profiling in Your Application, there are a number of --profile options that let you enable profiling of the application and kernel events during runtime execution. This option enables capturing profile data for data traffic between the kernel and host, kernel stalls, the execution times of kernels and compute units (CUs), as well as monitoring activity in Versal AI Engines.

IMPORTANT: Using the --profile option in v++ also requires the addition of the profile=true statement to the xrt.ini file. Refer to xrt.ini File.
The --profile commands can be specified in a configuration file under the [profile] section head using the following format, for example:
[profile]
data=all:all:all           # Monitor data on all kernels and CUs
data=k1:all:all            # Monitor data on all instances of kernel k1
data=k1:cu2:port3          # Specific CU master
data=k1:cu2:port3:counters # Specific CU master (counters only, no trace)
stall=all:all              # Monitor stalls for all CUs of all kernels
stall=k1:cu2               # Stalls only for cu2
exec=all:all               # Monitor execution times for all CUs
exec=k1:cu2                # Execution times only for cu2
aie=all                    # Monitor all AIE streams
aie=DataIn1                # Monitor the specific input stream in the ADF graph
aie=M02_AXIS               # Monitor specific stream interface

The various options of the command are described below:

--profile.aie <arg>

Enables profiling of AI Engine streams in adaptive data flow (ADF) applications, where <arg> is:

<ADF_graph_argument|pin name|all>
  • <ADF_graph_argument>: Specifies an argument name from the ADF graph application.
  • <pin_name>: Indicates a port on an AI Engine kernel.
  • <all>: Indicates monitoring all stream connections in the ADF application.
For example, to monitor the DataIn1 input stream use the following command:
v++ --link --profile.aie:DataIn1

--profile.data <arg>

Enables monitoring of data ports through the monitor IPs. This option needs to be specified during linking.

Where <arg> is:

[<kernel_name>|all]:[<cu_name>|all]:[<interface_name>|all](:[counters|all])
  • [<kernel_name>|all] defines a specific kernel to apply the command to, or the keyword all to apply the monitoring to all existing kernels, compute units, and interfaces with a single option.
  • [<cu_name>|all] when <kernel_name> has been specified, you can also define a specific CU to apply the command to, or indicate that it should be applied to all CUs for the kernel.
  • [<interface_name>|all] defines the specific interface on the kernel or CU to monitor for data activity, or monitor all interfaces.
  • [counters|all] is an optional argument that defaults to all when not specified. It lets you restrict the information gathering to just counters for larger designs, while all also includes the collection of actual trace information.

For example, to assign the data profile to all CUs and interfaces of kernel k1 use the following command:

v++ --link --profile.data:k1:all:all

--profile.exec <arg>

This option records the execution times of the kernel and provides minimum port data collection during the system run. This option needs to be specified during linking.

TIP: The execution time of a kernel is collected by default when --profile.data or --profile.stall is specified. You can specify --profile.exec for any CUs not covered by data or stall.

The syntax for exec profiling is:

[<kernel_name>|all]:[<cu_name>|all](:[counters|all])

For example, to profile the execution of cu2 for kernel k1, use the following command:

v++ --link --profile.exec:k1:cu2

--profile.stall

Adds stall monitoring logic to the device binary (.xclbin) which requires the addition of stall ports on the kernel interface. To facilitate this, the stall option must be specified during both compilation and linking.

The syntax for stall profiling is:

[<kernel_name>|all]:[<cu_name>|all](:[counters|all])

For example, to monitor stalls of cu2 for kernel k1 use the following command:

v++ --compile -k k1 --profile.stall ...
v++ --link --profile.stall:k1:cu2 ...

--profile.trace_memory

When building the hardware target (-t hw), use this option to specify the type and amount of memory to use for capturing trace data. You can specify the argument as follows:

<FIFO>:<size>|<MEMORY>[<n>]

This argument specifies trace buffer memory type for profiling.

FIFO:<size>
Specifies a FIFO for the trace buffer, with the size given in KB. The default is FIFO:8K and the maximum is 4G.
MEMORY[<n>]
Specifies the type and index of a memory resource on the platform, for example DDR[1]. Memory resources for the target platform can be identified with the platforminfo command. Supported memory types include HBM, DDR, PLRAM, HP, ACP, MIG, and MC_NOC.
IMPORTANT: Use with [Debug] trace_buffer_size in the xrt.ini file as described in xrt.ini File.
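
For example, the following illustrative commands select either a FIFO (FIFO:64K) or the DDR[1] memory resource, as reported by platforminfo, for the trace buffer:

v++ --link -t hw --profile.trace_memory FIFO:64K ...
v++ --link -t hw --profile.trace_memory DDR[1] ...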

--vivado Options

The --vivado.XXX options are used to configure the Vivado tools for synthesis and implementation of your device binary (.xclbin). For instance, you can specify the number of jobs to spawn, LSF commands to use for implementation runs, or the specific implementation strategies to use. You can also configure optimization, placement, and timing, or specify which reports to output.

IMPORTANT: Familiarity with the Vivado Design Suite is required to make the best use of these options. See the Vivado Design Suite User Guide: Implementation (UG904) for more information.

--vivado.impl.jobs

--vivado.impl.jobs <arg>

Specifies the number of parallel jobs the Vivado Design Suite uses to implement the device binary. Increasing the number of jobs allows the Vivado implementation step to spawn more parallel processes and complete faster.

For example:

v++ --link --vivado.impl.jobs 4

--vivado.impl.lsf

--vivado.impl.lsf <arg>

Specifies the bsub command line as a string to pass to an LSF cluster. This option is required to use the IBM Platform Load Sharing Facility (LSF) for Vivado implementation.

For example:

v++ --link --vivado.impl.lsf '{bsub -R \"select[type=X86_64]\" -N -q medium}'

--vivado.impl.strategies

--vivado.impl.strategies <arg>

Specifies a comma-separated list of strategy names for Vivado implementation runs. Use ALL to run all available implementation strategies. This lets you run a variety of implementation strategies at the same time during the build process and allows you to more quickly resolve the timing and routing issues of the design.
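
For example, the following illustrative command runs two standard Vivado implementation strategies in parallel; any strategy names recognized by the Vivado tool can be listed, or the keyword ALL:

v++ --link --vivado.impl.strategies Performance_Explore,Congestion_SpreadLogic_high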

--vivado.param

--vivado.param <arg>

Specifies parameters for the Vivado Design Suite to be used during synthesis and implementation of the FPGA binary (xclbin).
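
For example, the following command uses a representative Vivado parameter to write intermediate checkpoints during implementation:

v++ --link --vivado.param project.writeIntermediateCheckpoints=1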

--vivado.prop

--vivado.prop <arg>

Specifies properties for the Vivado Design Suite to be used during synthesis and implementation of the FPGA binary (xclbin).

Table 3. Prop Options
Property Name: vivado.prop <object_type>.<object_name>.<prop_name>
Valid Values: Various
Description: Lets you specify any property used in the Vivado hardware compilation flow.

<object_type> is run|fileset|file|project.

The <object_name> and <prop_name> values are described in the Vivado Design Suite Properties Reference Guide (UG912).

Examples:
vivado.prop run.impl_1.{STEPS.PLACE_DESIGN.ARGS.MORE OPTIONS}={-no_bufg_opt}
vivado.prop fileset.current.top=foo

If <object_type> is set to file, current is not supported.

If <object_type> is set to run, the special value __KERNEL__ can be used to specify run optimization settings for ALL kernels, instead of specifying them one by one.

For example, from the command line:

v++ --link --vivado.prop run.impl_1.STEPS.PHYS_OPT_DESIGN.IS_ENABLED=true
--vivado.prop run.impl_1.STEPS.PHYS_OPT_DESIGN.ARGS.DIRECTIVE=Explore
--vivado.prop run.impl_1.STEPS.PLACE_DESIGN.TCL.PRE=/…/xxx.tcl
The example above enables the optional PHYS_OPT_DESIGN step as part of the Vivado implementation process, sets the Explore directive for that step, and specifies a Tcl script to run before the PLACE_DESIGN step.
TIP: As described in Managing Vivado Synthesis and Implementation Results, each step in the Vivado synthesis and implementation process can have a Tcl prescript to run before the step, and a Tcl postscript to run after the step. This lets you customize the build process by inserting pre-processing or post-processing Tcl commands around the different steps. These scripts can be specified as shown in the example above.

These options can also be specified in a configuration file under the [vivado] section head using the following format:

[vivado]
prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.IS_ENABLED=true
prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.ARGS.DIRECTIVE=Explore
prop=run.impl_1.STEPS.PLACE_DESIGN.TCL.PRE=/…/xxx.tcl
IMPORTANT: Some Vivado properties have spaces in their names, such as MORE OPTIONS, and Tcl syntax requires these properties to be enclosed in braces, {}. However, the placement of the braces in the --vivado options is important. You must surround the complete property name with braces, rather than just a portion of it. For instance, the correct placement would be:
--vivado.prop run.impl_1.{STEPS.PLACE_DESIGN.ARGS.MORE OPTIONS}={-no_bufg_opt}
While the following would result in an error during the build process:
--vivado.prop run.impl_1.STEPS.PLACE_DESIGN.ARGS.{MORE OPTIONS}={-no_bufg_opt}

--vivado.synth.jobs

--vivado.synth.jobs <arg>

Specifies the number of parallel jobs the Vivado Design Suite uses to synthesize the device binary. Increasing the number of jobs allows Vivado synthesis to spawn more parallel processes and complete faster.

For example:

v++ --link --vivado.synth.jobs 4

--vivado.synth.lsf

--vivado.synth.lsf <arg>

Specifies the bsub command line as a string to pass to an LSF cluster. This option is required to use the IBM Platform Load Sharing Facility (LSF) for Vivado synthesis.

For example:

v++ --link --vivado.synth.lsf '{bsub -R \"select[type=X86_64]\" -N -q medium}'

Vitis Compiler Configuration File

A configuration file can also be used to specify the Vitis compiler options. A configuration file provides an organized way of passing options to the compiler by grouping similar switches together, and minimizing the length of the v++ command line. Some of the features that can be controlled through config file entries include:

  • HLS options to configure kernel compilation
  • Connectivity directives for system linking such as the number of kernels to instantiate or the assignment of kernel ports to global memory
  • Directives for the Vivado Design Suite to manage hardware synthesis and implementation.

In general, any v++ command option can be specified in a configuration file. However, the configuration file supports defining sections containing groups of related commands to help manage build options and strategies. The following table lists the defined sections.

Table 4. Section Tags of the Configuration File
Section Name Compiler/Linker Description
[advanced] either --advanced Options:
  • misc
  • param
  • prop
[clock] compiler --clock Options:
  • defaultFreqHz
  • defaultID
  • defaultTolerance
  • freqHz
  • id
  • tolerance
[connectivity] linker --connectivity Options:
  • nk
  • sc
  • slr
  • sp
  • connect
[debug] linker --debug Options
  • chipscope
  • list_ports
  • protocol
[hls] compiler HLS directives --hls Options:
  • clock
  • export_mode
  • export_project
  • jobs
  • lsf
  • post_tcl
  • pre_tcl
[linkhook] linker --linkhook Options
  • custom
  • do_first
  • do_last
  • list_steps
[package] packager --package Options
  • aie_debug_port
  • bl31_elf
  • boot_mode
  • defer_aie_run
  • domain
  • dtb
  • enable_aie_debug
  • image_format
  • kernel_image
  • no_image
  • out_dir
  • ps_debug_port
  • ps_elf
  • rootfs
  • sd_dir
  • sd_file
  • uboot
[profile] linker --profile Options
  • aie
  • data
  • exec
  • stall
  • trace_memory
[vivado] linker --vivado Options:
  • impl.jobs
  • impl.lsf
  • impl.strategies
  • param
  • prop
  • synth.jobs
  • synth.lsf
TIP: Comments can be added to the configuration file by starting the line with a "#". The end of a section is specified by an empty line at the end of the section.

Because the v++ command supports multiple config files on a single v++ command line, you can partition your configuration files into related options that define compilation and linking strategies or Vivado implementation strategies, and apply multiple config files during the build process.

Configuration files are optional. There are no naming restrictions on the files, and you can use any number of them. All v++ options can be put in a single configuration file if desired. However, grouping related switches into separate files can help you organize your build strategy. For example, group [connectivity]-related switches in one file, and [vivado] options in a separate file.

The configuration file is specified through the use of the v++ --config option as discussed in the Vitis Compiler General Options. An example of the --config option follows:

v++ --config ../src/connectivity.cfg
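
The contents of connectivity.cfg in this example might look like the following sketch, which uses the nk and sp options listed under [connectivity] in Table 4; the kernel and memory names are placeholders:

[connectivity]
nk=vadd:2
sp=vadd_1.in1:DDR[0]
sp=vadd_1.in2:DDR[1]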

Switches are read in the order they are encountered. If the same switch is repeated with conflicting information, the first switch read is used. The order of precedence for switches is as follows, where item one takes highest precedence:

  1. Command line switches.
  2. Config files (on command line) from left-to-right.
  3. Within a config file, precedence is from top-to-bottom.
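
For example (hypothetical values), if the command line specifies --connectivity.nk vadd:2 and a config file passed with --config contains nk=vadd:4 under the [connectivity] section head, the command-line value of 2 is used, because command-line switches are read first.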

Using the Message Rule File

The v++ command executes various Xilinx tools during kernel compilation and linking. These tools generate many messages that provide build status to you. These messages might or might not be relevant to you depending on your focus and design phase. The Message Rule file (.mrf) can be used to better manage these messages. It provides commands to promote important messages to the terminal or suppress unimportant ones. This helps you better understand the kernel build result and explore methods to optimize the kernel.

The Message Rule file is a text file consisting of comments and supported commands. Only one command is allowed on each line.

Comment

Any line with “#” as the first non-white space character is a comment.

Supported Commands

By default, v++ recursively scans the entire working directory and promotes all error messages to the v++ output. The promote and suppress commands below provide more control on the v++ output.

  • promote: This command indicates that matching messages should be promoted to the v++ output.
  • suppress: This command indicates that matching messages should be suppressed or filtered from the v++ output. Note that errors cannot be suppressed.

Enter only one command per line.

Command Options

The Message Rule file can have multiple promote and suppress commands. Each command can have one and only one of the options below. The options are case-sensitive.

  • -id [<message_id>]: All messages matching the specified message ID are promoted or suppressed. The message ID is in the format nnn-mmm. As an example, the following is a warning message from HLS. The message ID in this case is 204-68.
    WARNING: [V++ 204-68] Unable to enforce a carried dependence constraint (II = 1, distance = 1, offset = 1) 
    between bus request on port 'gmem' 
    (/matrix_multiply_cl_kernel/mmult1.cl:57) and bus request on port 'gmem'

    For example, to suppress messages with message ID 204-68, specify the following: suppress -id 204-68.

  • -severity [<severity_level>]: The following are valid values for the severity level. All messages matching the specified severity level will be promoted or suppressed.
    • info
    • warning
    • critical_warning

      For example, to promote messages with a severity of critical_warning, specify the following: promote -severity critical_warning.

Precedence of Message Rules

The suppress rules take precedence over promote rules. If the same message ID or severity level is passed to both promote and suppress commands in the Message Rule file, the matching messages are suppressed and not displayed.

Example of Message Rule File

The following is an example of a valid Message Rule file:

# promote all warning, critical warning
promote -severity warning
promote -severity critical_warning
# suppress the critical warning message with id 19-2342
suppress -id 19-2342