Vitis Compiler Command
This section describes the Vitis compiler command, v++, and the various options it supports for both compiling and linking the FPGA binary.
The Vitis compiler is a standalone command line utility for both compiling kernel accelerator functions into Xilinx object (.xo) files, and linking them with other .xo files and supported platforms to build an FPGA binary.
For additional information about the use of the v++
command options for compile, link, packaging, and general processes, see
these additional sections:
Vitis Compiler General Options
The Vitis compiler supports many options for both the compilation process and the linking process. These options provide a range of features, and some apply specifically to compile or link, while others can be used, or are required for both compile and link.
These options can also be specified in a configuration file using the --config option, as discussed in the Vitis Compiler Configuration File. For example, the --platform option can be specified in a configuration file without a section head using the following syntax:
platform=xilinx_u200_xdma_201830_2
--board_connection
- Applies to
- Compile and link
--board_connection <DIMM_connector>:<vbnv_of_DIMM_board>
Specifies a dual in-line memory module (DIMM) board file for each DIMM connector slot. The board is specified using the Vendor:Board:Name:Version (vbnv) attribute of the DIMM card as it appears in the board repository.
-c | --compile
- Applies to
- Compile
--compile
Required for compilation, but mutually exclusive with --link
. Run v++ -c
to generate .xo files from kernel source files.
--config
- Applies to
- Compile and link
--config <config_file> ...
Specifies a configuration file containing v++
switches. The configuration file can be used to capture
compilation or linking strategies that can be easily reused by referring to the
config file on the v++
command line. In addition,
the config file allows the v++
command line to be
shortened to include only the options that are not specified in the config file.
Refer to the Vitis Compiler Configuration File for more
information.
Multiple configuration files can be specified on the v++
command line. A separate --config
switch is required for each file used. For example:
v++ -l --config cfg_connectivity.txt --config cfg_vivado.txt ...
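As an illustration of capturing a linking strategy in a file, a hypothetical cfg_connectivity.txt might look like the following sketch. The kernel name vadd, the compute-unit count, and the DDR bank assignments are invented for this example; the [connectivity] section syntax is described in the Vitis Compiler Configuration File:

```ini
platform=xilinx_u200_xdma_201830_2

[connectivity]
nk=vadd:2
sp=vadd_1.m_axi_gmem:DDR[0]
sp=vadd_2.m_axi_gmem:DDR[1]
```

The file is then passed on the command line with v++ -l --config cfg_connectivity.txt, shortening the command line to only the options not captured in the file.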
--custom_script
- Applies to
- Compile and link
--custom_script <kernel_name>:<file_name>
This option lets you specify custom Tcl scripts to be used in the
build process during compilation or linking. Use with the --export_script
option to create, edit, and run the scripts to
customize the build process.
When used with the v++ --compile
command, this option lets you specify a custom HLS script to be used when compiling
the specified kernel. The script lets you modify or customize the Vitis HLS tool. Use the --export_script
option to extract a Tcl script Vitis HLS uses to compile the kernel, modify the
script as needed, and resubmit using the --custom_script
option to better manage the kernel build process.
v++ -c -k kernel1 -export_script ...
*** Modify the exported script to customize in some way, then resubmit. ***
v++ -c --custom_script kernel1:./kernel1.tcl ...
When used with the v++ --link
command for the hardware build target (-t hw
),
this option lets you specify the absolute path to an edited run_script_map.dat file. This file contains a list
of steps in the build process, and Tcl scripts that are run by the Vitis and Vivado tools during those steps. You can edit run_script_map.dat to specify custom Tcl scripts to
run at those steps in the build process. You must use the following steps to
customize the Tcl scripts:
- Run the build process specifying the
--export_script
option as follows:v++ -t hw -l -k kernel1 -export_script ...
- Copy the Tcl scripts referenced in the run_script_map.dat file for any of the steps you want to customize. For example, copy the Tcl file specified for the synthesis run, or the implementation run. You must copy the file to a separate location, outside of the project build structure.
- Edit the Tcl script to add or modify any of the existing commands to create a new custom Tcl script.
- Edit the run_script_map.dat file to point a specific implementation step to the new custom script.
- Relaunch the build process using the
--custom_script
option, specifying the absolute path to the run_script_map.dat file as shown below:v++ -t hw -l -k kernel1 -custom_script /path/to/run_script_map.dat
If you are customizing the synthesis step, you must add the following commands to your custom synthesis Tcl script:
read_xdc dont_touch.xdc
set_property used_in_implementation false [get_files dont_touch.xdc]
The synthesis run will return an error related to a missing dont_touch.xdc file if these commands are not added.
-D | --define
- Applies to
- Compile and link
--define <arg>
Valid macro name and definition pair: <name>=<definition>
.
Predefines <name> as a macro with <definition>. This option is passed to the v++ pre-processor.
--dk
- Applies to
- Compile and link
--dk <arg>
This option enables debug IP core insertion in the FPGA binary
(.xclbin) for hardware debugging. This
option lets you specify the type of debug core to add, and which compute unit and
interfaces to monitor with ChipScope™. The
--dk
option allows you to attach AXI protocol
checkers and System ILA cores at the interfaces to the kernels for debugging and
performance monitoring purposes.
The System Integrated Logic Analyzer (ILA) provides transaction level visibility into an accelerated kernel or function running on hardware. AXI traffic of interest can also be captured and viewed using the System ILA core.
The AXI Protocol Checker debug core is designed to monitor AXI interfaces on the accelerated kernel. When attached to an interface of a CU, it actively checks for protocol violations and provides an indication of which violation occurred.
Valid values for <arg>
include:
[protocol|chipscope|list_ports]:<cu_name>:<interface_name>
Where:
- protocol: Adds the AXI Protocol Checker debug core to the design. Can be specified with the keyword all, or with <cu_name>:<interface_name>.
- chipscope: Adds the System Integrated Logic Analyzer debug core to the design. The chipscope option cannot accept the keyword all, and requires the <cu_name> to be specified, and optionally the <interface_name>.
- list_ports: Shows a list of valid compute units and port combinations in the current design. This is informational to help you craft the command line or config file.
- <cu_name>: Specifies the compute unit to apply the --dk option to.
- <interface_name>: Optional. If not specified, all ports on the specified CU are analyzed.
For example:
v++ --link --dk chipscope:vadd_1
--export_script
- Applies to
- Compile and link
--export_script
This option runs the build process up to the point of exporting a
script file, or list of script files, and then stops execution. The build process
must be completed using the --custom_script
option. This lets you edit the exported script, or list of scripts, and then rerun
the build using your custom scripts.
When used with the v++ --compile
command, this option exports a Tcl script for the specified kernel, <kernel_name>.tcl, that can be used to execute
Vitis HLS, but stops the build process
before actually launching the HLS tool. This lets you interrupt the build process to
edit the generated Tcl script, and then restart the build process using the --custom_script
option, as shown in the following
example:
v++ -c -k kernel1 -export_script ...
Note: This option is not supported for the software emulation build (-t sw_emu) of OpenCL kernels.
When used with the v++ --link
command for the hardware build target (-t hw
),
this option exports a run_script_map.dat file
in the current directory. This file contains a list of steps in the build process,
and Tcl scripts that are run by the Vitis and
Vivado tools during those steps. You can
edit the specified Tcl scripts, customizing the build process in those scripts, and
relaunch the build using the --custom_script
option. Export the run_script_map.dat file
using the following command:
v++ -t hw -l -k kernel1 -export_script ...
--from_step
- Applies to
- Compile and link
--from_step <arg>
Specifies a step name for the Vitis compiler build process, to start the build process from that
step. If intermediate results are available, the link process will fast forward and
begin execution at the named step if possible. This allows you to run the build
through a --to_step
, and then resume the build
process at the --from_step
, after interacting
with your project in some way. You can use the --list_steps option to determine the list of valid steps.
The --from_step and --to_step options are incremental build options that require you to use the same project directory when launching the Vitis compiler using --from_step to resume the build as you specified when using --to_step to start the build. For example:
v++ --link --from_step vpl.update_bd
-g | --debug
- Applies to
- Compile and link
-g
Generates code for debugging the kernel. Using this option adds features to facilitate debugging the kernel as it is compiled and the FPGA binary is built.
For example:
v++ -g ...
-h | --help
-h
Prints the help contents for the v++
command. For example:
v++ -h
-I | --include
- Applies to
- Compile and link
--include <arg>
Add the specified directory to the list of directories to be searched for header files. This option is passed to the Vitis compiler pre-processor.
<input_file>
- Applies to
- Compile and link
<input_file1> <input_file2> ...
Specifies an OpenCL or C/C++
kernel source file for v++
compilation, or
Xilinx object files (.xo) for v++
linking.
For example:
v++ -l kernel1.xo kernelRTL.xo ...
--interactive
- Applies to
- Compile and link
--interactive [ impl ]
When specified, v++ configures the necessary environment and launches the Vivado tool with the implementation project.
Because you are interactively launching the Vivado tool, the linking process is stopped at the vpl step, which is the equivalent of using the --to_step vpl option in your v++ command. When you are done using the Vivado tool and you save the design checkpoint (DCP), you can rerun the linking command using the --from_step option to pick the build up at the vpl process.
For example:
v++ --interactive impl
-j | --jobs
- Applies to
- Compile and link
--jobs <arg>
Valid values specify a number of parallel jobs.
This option specifies the number of parallel jobs the Vivado Design Suite uses to implement the FPGA binary. Increasing the number of jobs allows the Vivado implementation step to spawn more parallel processes and complete faster.
For example:
v++ --link --jobs 4
-k | --kernel
- Applies to
- Compile and link
--kernel <arg>
Compile only the specified kernel from the input file. Only one
-k
option is allowed per v++
command. Valid values include the name of the
kernel to be compiled from the input .cl or
.c/.cpp kernel source code.
This is required for C/C++ kernels, but is optional for OpenCL kernels. OpenCL uses the kernel
keyword to
identify a kernel. For C/C++ kernels, you must identify the kernel by -k
or --kernel
.
When an OpenCL source file is
compiled without the -k
option, all the kernels in
the file are compiled. Use -k
to target a
specific kernel.
For example:
v++ -c --kernel vadd
--kernel_frequency
- Applies to
- Compile and link
--kernel_frequency <clockID>:<freq>|<clockID>:<freq>
Specifies a user-defined clock frequency (in MHz) for the kernel,
overriding the default clock frequency defined on the hardware platform. The <freq>
specifies a single frequency for kernels
with only a single clock, or can be used to specify the <clockID> and the
<freq> for kernels that support two clocks.
The syntax for overriding the clock on a platform with only one kernel clock, is to simply specify the frequency in MHz:
v++ --kernel_frequency 300
To override a specific clock on a platform with two clocks, specify the clock ID and frequency:
v++ --kernel_frequency 0:300
To override both clocks on a multi-clock platform, specify each clock ID and the corresponding frequency. For example:
v++ --kernel_frequency 0:300|1:500
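Because v++ options can be placed in a configuration file without a section head, the same clock override can also be captured in a config file passed with --config. A sketch, reusing the two-clock example above:

```ini
kernel_frequency=0:300|1:500
```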
-l | --link
- Applies to
- Link
--link
This is a required option for the linking process, which follows
compilation, but is mutually exclusive with --compile
. Run v++
in link mode to
link .xo input files and generate an .xclbin output file.
--list_steps
- Applies to
- Compile and link
--list_steps
Lists the valid run steps for a given target. This option returns a list of steps that can be used in the --from_step or --to_step options. The command must be specified with the following options:
- -t | --target [ sw_emu | hw_emu | hw ]
- --compile | --link: Specifies the list of steps from either the compile or link process for the specified build target.
For example:
v++ -t hw_emu --link --list_steps
--log_dir
- Applies to
- Compile and link
--log_dir <dir_name>
Specifies a directory to store log files into. If --log_dir
is not specified, the tool saves the log
files to ./_x/logs. Refer to Output Directories from the v++ Command for more information.
For example:
v++ --log_dir /tmp/myProj_logs ...
--lsf
- Applies to
- Compile and link
--lsf <arg>
Specifies the bsub
command line
as a string to pass to an LSF cluster. This option is required to use the IBM
Platform Load Sharing Facility (LSF) for Vivado implementation and synthesis.
For example:
v++ --link --lsf '{bsub -R \"select[type=X86_64]\" -N -q medium}'
--message_rules
- Applies to
- Compile and link
--message_rules <file_name>
Specifies a message rule file with rules for controlling messages. Refer to Using the Message Rule File for more information.
For example:
v++ --message_rules ./minimum_out.mrf ...
--no_ip_cache
- Applies to
- Compile and link
--no_ip_cache
Disables the IP cache for out-of-context (OOC) synthesis for Vivado Synthesis. Disabling the IP cache repository requires the tool to regenerate the IP synthesis results for every build, and can increase the build time. However, it also results in a clean build, eliminating earlier results for IP in the design.
For example:
v++ --no_ip_cache ...
-O | --optimize
- Applies to
- Compile and link
--optimize <arg>
This option specifies the optimization level of the Vivado implementation results. Valid optimization values include the following:
- 0: Default optimization. Reduces compilation time and makes debugging produce the expected results.
- 1: Optimizes to reduce power consumption. This takes more time to build the design.
- 2: Optimizes to increase kernel speed. This option increases build time, but also improves the performance of the generated kernel.
- 3: This optimization provides the highest level performance in the generated code, but compilation time can increase considerably.
- s: Optimizes for size. This reduces the logic resources of the device used by the kernel.
- quick: Reduces Vivado implementation time, but can reduce kernel performance, and increases the resources used by the kernel.
For example:
v++ --link --optimize 2
-o | --output
- Applies to
- Compile and link
-o <output_name>
Specifies the name of the output file generated by the v++
command. The compilation (-c
) process output name must end with the .xo suffix, for Xilinx object file. The linking (-l
) process output file must end with the .xclbin suffix, for Xilinx
executable binary.
For example:
v++ -o krnl_vadd.xo
If -o or --output is not specified, the output file name defaults to the following:
- a.xo for compilation.
- a.xclbin for linking.
-f | --platform
- Applies to
- Compile and link
--platform <platform_name>
Specifies the name of a supported acceleration platform as
specified by the $PLATFORM_REPO_PATHS
environment
variable, or the full path to the platform .xpfm file. For a list of supported platforms for the release, see
the Vitis 2020.1 Software Platform Release Notes.
This is a required option for both compilation and linking, to
define the target Xilinx platform of the build
process. The --platform
option accepts either a
platform name, or the path to a platform file xpfm, using the full or relative path.
The --platform and -t options specified when the .xo file is generated by compilation must be the same --platform and -t used during linking. For more information, see platforminfo Utility.
For example:
v++ --platform xilinx_u200_xdma_201830_2 ...
The --platform option can also be specified in a configuration file using the --config option. For example, it can be specified in a configuration file without a section head using the following syntax:
platform=xilinx_u200_xdma_201830_2
--profile_kernel
- Applies to
- Compile and link
--profile_kernel <arg>
This option enables capturing profile data for data traffic between
the kernel and host, kernel stalls, and kernel execution times. There are three
distinct forms of --profile_kernel
:
- data: Enables monitoring of data ports through the monitor IPs. This option needs to be specified during linking.
- stall: Includes stall monitoring logic in the FPGA binary. However, it requires the addition of stall ports on the kernel interface. To facilitate this, the stall option is required for both compilation and linking.
- exec: Records the execution times of the kernel and provides minimum port data collection during the system run. The execution time of the kernel is also collected by default for data or stall data collection. This option needs to be specified during linking.
The --profile_kernel option in v++ also requires the addition of the profile=true statement to the xrt.ini file. Refer to xrt.ini File.
The syntax for data profiling is:
data:[ <kernel_name> | all ]:[ <cu_name> | all ]:[ <interface_name> | all ](:[ counters | all ])
The kernel_name
, cu_name
, and interface_name
can be specified to determine the specific interface
the performance monitor is applied to. However, you can also specify the keyword
all
to apply the monitoring to all existing
kernels, compute units, and interfaces with a single option.
The last option, <counters|all>
is not required, as it defaults to all
when not specified. It allows you to restrict the
information gathering to just counters
for larger
designs, while all
will include the collection of
actual trace information.
The syntax for stall
or exec
profiling is:
[ stall | exec ]:[ <kernel_name> | all ]:[ <cu_name> | all ](:[ counters | all ])
For stall or exec profiling, the <interface_name> field is not used.
The following example enables logging profile data
for all
interfaces, on all CUs for all kernels:
v++ -g -l --profile_kernel data:all:all:all ...
The --profile_kernel option is additive and can be used multiple times to specify profiling for different kernels, CUs, and interfaces.
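As noted above, the runtime must also be told to capture profile data. A minimal xrt.ini sketch is shown below; the timeline_trace entry is an additional, optional key for trace capture, and the xrt.ini File section remains the authoritative list of keys:

```ini
[Debug]
profile=true
timeline_trace=true
```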
--remote_ip_cache
- Applies to
- Compile and link
--remote_ip_cache <dir_name>
Specifies the location of the remote IP cache directory for Vivado Synthesis to use during out-of-context (OOC) synthesis of IP. OOC synthesis lets the Vivado synthesis tool reuse synthesis results for IP that have not been changed in iterations of a design. This can reduce the time required to build your .xclbin files, due to reusing synthesis results.
When the --remote_ip_cache option is not specified, the IP cache is written to the current working directory
from which v++
was launched. You can use this
option to provide a different cache location, used across multiple projects for
instance.
For example:
v++ --remote_ip_cache /tmp/IP_cache_dir ...
--report_dir
- Applies to
- Compile and link
--report_dir <dir_name>
Specifies a directory to store report files into. If --report_dir
is not specified, the tool saves the
report files to ./_x/reports. Refer to Output Directories from the v++ Command for more information.
For example:
v++ --report_dir /tmp/myProj_reports ...
-R | --report_level
- Applies to
- Compile and link
--report_level <arg>
Valid report levels: 0, 1, 2, estimate.
These report levels have mappings kept in the optMap.xml file. You can override the installed optMap.xml to define custom report levels.
- The -R0 specification turns off all intermediate design checkpoint (DCP) generation during Vivado implementation, and turns on post-route timing report generation.
- The -R1 specification includes everything from -R0, plus report_failfast pre-opt_design, report_failfast post-opt_design, and enables all intermediate DCP generation.
- The -R2 specification includes everything from -R1, plus report_failfast post-route_design.
- The -Restimate specification forces Vitis HLS to generate a design.xml file if it does not exist, and then generates a System Estimate report, as described in System Estimate Report.
TIP: This option is useful for the software emulation build (-t sw_emu), when design.xml is not generated by default.
For example:
v++ -R2 ...
--reuse_impl
- Applies to
- Link
--reuse_impl <arg>
Specifies the path and file name of an implemented design
checkpoint (DCP) file to use when generating the FPGA binary (xclbin) file. The link process uses the specified
implemented DCP to extract the FPGA bitstream and generates the xclbin. You can manually edit the Vivado project
created by a previously completed Vitis build, or specify the --to_step
option to interrupt the Vitis build process
and manually place and route a synthesized design, for instance. This allows you to
work interactively with Vivado Design Suite to
change the design and use DCP in the build process.
The --reuse_impl option is an incremental build option that requires you to use the same project directory when resuming the build with --reuse_impl as you used when specifying --to_step to start the build. For example:
v++ --link --reuse_impl ./manual_design.dcp
-s | --save-temps
- Applies to
- Compile and link
--save-temps
Directs the v++
command to save
intermediate files/directories created during the compilation and link process. Use
the --temp_dir
option to specify a location to
write the intermediate files to.
For example:
v++ --save-temps ...
-t | --target
- Applies to
- Compile and link
-t [ sw_emu | hw_emu | hw ]
Specifies the build target, as described in Build Targets. The build target determines the results of the
compilation and linking processes. You can choose to build an emulation model for
debug and test, or build the actual system to run in hardware. The build target
defaults to hw
if -t
is not specified.
The --platform and -t options specified when the .xo file is generated by compilation must be the same --platform and -t used during linking.
The valid values are:
- sw_emu: Software emulation.
- hw_emu: Hardware emulation.
- hw: Hardware.
For example:
v++ --link -t hw_emu
--temp_dir
- Applies to
- Compile and link
--temp_dir <dir_name>
This allows you to manage the location where the tool writes
temporary files created during the build process. The temporary results are written
by the v++
compiler, and then removed, unless the
--save-temps
option is also specified.
If --temp_dir
is not specified,
the tool saves the temporary files to ./_x/temp. Refer to Output Directories from the v++ Command for more information.
For example:
v++ --temp_dir /tmp/myProj_temp ...
--to_step
- Applies to
- Compile and link
--to_step <arg>
Specifies a step name, for either the compile or link process, to run the build process through that step. You can use the --list_steps option to determine the list of valid compile or link steps.
The build process terminates after completing the named step. At this point, you can interact with the build results; for example, manually accessing the HLS project or the Vivado Design Suite project to perform specific tasks. To then return to the build flow, launch the v++ command with the --from_step option.
The --to_step and --from_step options are incremental build options that require you to use the same project directory when launching the Vitis compiler using --from_step to resume the build as you specified when using --to_step to start the build. You must also specify --save-temps when using --to_step to preserve the temporary files required by the Vivado tools. For example:
v++ --link --save-temps --to_step vpl.update_bd
--trace_memory
- Applies to
- Compile and link
--trace_memory <arg>
Use this option with the --profile_kernel option when linking for the hardware build target, to specify the type and amount of memory to use for capturing trace data.
The <FIFO>:<size>|<MEMORY>[<n>] argument specifies the trace buffer memory type for profiling.
- FIFO:<size>: Specified in KB. Default is FIFO:8K. The maximum is 4G.
- Memory[<N>]: Specifies the type and number of memory
resource on the platform. Memory resources for the target platform can be
identified with the
platforminfo
command. Supported memory types include HBM, DDR, PLRAM, HP, ACP, MIG, and MC_NOC. For example, DDR[1].
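For example, after linking with --trace_memory DDR[1] (the DDR bank name here is an assumption; identify the memory resources of your platform with the platforminfo command), the runtime trace buffer size can be set in xrt.ini, as in this sketch:

```ini
[Debug]
profile=true
trace_buffer_size=1M
```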
When specifying --trace_memory during the linking step, you should also specify [Debug] trace_buffer_size in the xrt.ini file as described in xrt.ini File.
-v | --version
-v
Prints the version and build information for the v++
command. For example:
v++ -v
--user_board_repo_paths
- Applies to
- Compile and link
--user_board_repo_paths <repo_dir>
Specifies an existing user board repository for DIMM board files. This value is prepended to the board_part_repo_paths property of the Vivado project.
--user_ip_repo_paths
- Applies to
- Compile and link
--user_ip_repo_paths <repo_dir>
Specifies the directory location of one or more user IP repository paths to be searched first for IP used in the kernel design. This value is prepended to the ip_repo_paths property used by the Vivado tool to locate IP cores. IP definitions from these specified paths are used ahead of IP repositories from the hardware platform (.xsa) or from the Xilinx IP catalog.
Multiple --user_ip_repo_paths can be specified on the v++ command line.
The following lists show the priority order in which IP definitions are found during the build process, from high to low. Note that each of these entries can include multiple directories.
- For the system hardware build (-t hw):
  - IP definitions from --user_ip_repo_paths.
  - Kernel IP definitions (vpl --iprepo switch value).
  - IP definitions from the IP repository associated with the platform.
  - IP cache from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/cache/).
  - Xilinx IP catalog from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/ip/).
- For the hardware emulation build (-t hw_emu):
  - IP definitions and user emulation IP repository from --user_ip_repo_paths.
  - Kernel IP definitions (vpl --iprepo switch value).
  - IP definitions from the IP repository associated with the platform.
  - IP cache from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/cache/).
  - $::env(XILINX_VITIS)/data/emulation/hw_em/ip_repo
  - $::env(XILINX_VIVADO)/data/emulation/hw_em/ip_repo
  - Xilinx IP catalog from the installation area (for example, <Install_Dir>/Vitis/2019.2/data/ip/).
For example:
v++ --user_ip_repo_paths ./myIP_repo ...
--advanced Options
The --advanced.param
and --advanced.prop
options specify parameters and
properties for use by the v++
command. When compiling
or linking, these options offer fine-grain control over the hardware generated by the
Vitis core development kit, and the hardware
emulation process.
The arguments for the --advanced.xxx
options are specified as <param_name>=<param_value>
. For example:
v++ --link --advanced.param compiler.enableXSAIntegrityCheck=true --advanced.prop kernel.foo.kernel_flags="-std=c++0x"
These options can also be specified in a configuration file using the --config option, as discussed in Vitis Compiler Configuration File. For example, --advanced.param can be specified in a configuration file under the [advanced] section head.
--advanced.param
--advanced.param <param_name>=<param_value>
Specifies advanced parameters as described in the table below.
| Parameter Name | Valid Values | Description |
|---|---|---|
| compiler.acceleratorBinaryContent | Type: String | Content to insert in the xclbin. Valid options are bitstream and dcp. |
| compiler.addOutputTypes | Type: String | Additional output types produced by the Vitis compiler. Valid values include: xclbin, sd_card, hw_export, and qspi. |
| compiler.errorOnHoldViolation | Type: Boolean. Default value: TRUE | Error out if there is a hold violation. |
| compiler.fsanitize | Type: String | Enables additional memory access checks for OpenCL kernels as described in Debugging OpenCL Kernels. Valid values include: address, memory. |
| compiler.interfaceRdBurstLen | Type: Int Range | Specifies the expected length of AXI read bursts on the kernel AXI interface. This is used with compiler.interfaceRdOutstanding to determine the hardware buffer sizes. Values are 1 through 256. |
| compiler.interfaceWrBurstLen | Type: Int Range | Specifies the expected length of AXI write bursts on the kernel AXI interface. This is used with compiler.interfaceWrOutstanding to determine the hardware buffer sizes. Values are 1 through 256. |
| compiler.interfaceRdOutstanding | Type: Int Range | Specifies how many outstanding reads to buffer on the kernel AXI interface. Values are 1 through 256. |
| compiler.interfaceWrOutstanding | Type: Int Range | Specifies how many outstanding writes to buffer on the kernel AXI interface. Values are 1 through 256. |
| compiler.maxComputeUnits | Type: Int. Default value: -1 | Maximum compute units allowed in the system. Any positive value will overwrite the numComputeUnits setting in the hardware platform (.xsa). The default value of -1 preserves the setting in the platform. |
| compiler.skipTimingCheckAndFrequencyScaling | Type: Boolean. Default value: FALSE | Causes the Vivado tool to skip the timing check and optional clock frequency scaling that occurs after the last step of the implementation process, which is either route_design or post-route phys_opt_design. |
| compiler.userPreCreateProjectTcl | Type: String | Specifies a Tcl script to run before creating the Vivado project in the Vitis build process. |
| compiler.userPreSysLinkOverlayTcl | Type: String | Specifies a Tcl script to run after opening the Vivado IP integrator block design, before running the compiler-generated dr.bd.tcl script in the Vitis build process. |
| compiler.userPostSysLinkOverlayTcl | Type: String | Specifies a Tcl script to run after running the compiler-generated dr.bd.tcl script. |
| compiler.userPostDebugProfileOverlayTcl | Type: String | Specifies a Tcl script to run after validating the Vivado IP integrator block design in the Vitis build process. |
| compiler.worstNegativeSlack | Type: Float. Default value: 0 | Specifies the worst acceptable negative slack for the design, in nanoseconds (ns). When negative slack exceeds the specified value, the tool might try to scale the clock frequency to achieve timing results. |
| compiler.xclDataflowFifoDepth | Type: Int | Specifies the depth of FIFOs used in the kernel dataflow region. |
| hw_emu.compiledLibs | Type: String | Uses the specified compiled libraries (clibs) for the specified simulator. |
| hw_emu.debugMode | Valid values: gdb, wdb. Default value: wdb | Compile-time switch for hardware emulation debug mode. The default value is wdb, which runs simulation in waveform mode. |
| hw_emu.enableProtocolChecker | Type: Boolean. Default value: FALSE | Enables the lightweight AXI protocol checker (lapc) during HW emulation. This is used to confirm the accuracy of any AXI interfaces in the design. |
| hw_emu.platformPath | Type: String | Specifies the path to the custom platform directory. The <platformPath> directory should meet the requirements for use in platform creation. |
| hw_emu.scDebugLevel | Valid values: none, waveform, log, waveform_and_log. Default value: waveform_and_log | Sets the TLM transaction debug level of the Vivado logic simulator (xsim). |
| hw_emu.simulator | Valid values: XSIM, QUESTA. Default value: XSIM | Uses the specified simulator for the hardware emulation run. |
For example:
v++ --advanced.param compiler.addOutputTypes="hw_export"
This option can be specified in a configuration file under the [advanced] section head using the following format:
[advanced]
param=compiler.addOutputTypes="hw_export"
--advanced.prop
--advanced.prop <arg>
Specifies advanced kernel or solution properties for kernel compilation
where <arg>
is one of the values described
in the table below.
Property Name | Valid Values | Description |
---|---|---|
kernel.<kernel_name>.kernel_flags |
Type: String Default Value:
|
Sets specific compile flags on the kernel
<kernel_name> . |
solution.device_repo_path |
Type: String Default Value:
|
Specifies the path to a repository of hardware
platforms. The --platform option
with full path to the .xpfm
platform file should be used instead. |
solution.hls_pre_tcl |
Type: String Default Value:
|
Specifies the path to a Vitis HLS Tcl file, which is executed before the C code is synthesized. This allows Vitis HLS configuration settings to be applied prior to synthesis. |
solution.hls_post_tcl |
Type: String Default Value:
|
Specifies the path to a Vitis HLS Tcl file, which is executed after the C code is synthesized. |
solution.kernel_compiler_margin |
Type: Float Default Value: 12.5% of the kernel clock period. |
The clock margin (in ns) for the kernel. This value is subtracted from the kernel clock period prior to synthesis to provide some margin for place and route delays. |
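As an illustrative sketch, the --advanced.prop values above can be grouped in a configuration file under the [advanced] section head. The kernel name vadd and the Tcl script paths here are hypothetical:

```
[advanced]
# Pass extra compile flags to the hypothetical kernel "vadd"
prop=kernel.vadd.kernel_flags=-std=c++14
# Run Tcl scripts before and after Vitis HLS synthesis (placeholder paths)
prop=solution.hls_pre_tcl=./scripts/pre_synth.tcl
prop=solution.hls_post_tcl=./scripts/post_synth.tcl
```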
--advanced.misc
--advanced.misc <arg>
Specifies advanced tool directives for kernel compilation.
--clock Options
--clock
options are only intended for use with embedded
processor platforms, and do not support Alveo data
center accelerator cards at this time. The --clock.XXX
options provide a
method for assigning clocks to kernels from the v++
command line and locating the required kernel clock frequency source during the linking
process. There are a number of options that can be used with increasing specificity. The
order of precedence is determined by how specific a clock option is. The rules are
listed in order from general to specific, where the specific rules take precedence over
the general rules:
- When no
--clock.XX
option is specified, the platform default clock will be applied. For 2-clock kernels, clock ID 0 will be assigned toap_clk
and clock ID 1 will be assigned toap_clk_2
. - Specifying
--clock.defaultId=<id>
defines a specific clock ID for all kernels, overriding the platform default clock. - Specifying
--clock.defaultFreq=<Hz>
defines a specific clock frequency for all kernels that overrides a user specified default clock ID, and the platform default clock. - Specifying
--clock.id=<id>:<cu>
assigns the specified clock ID to all clock pins on the specified CU, overriding user specified default frequency, ID, and the platform default clock. - Specifying
--clock.id=<id>:<cu>.<clk0>
assigns the specified clock ID to the specified clock pin on the specified CU. - Specifying
--clock.freqHz=<Hz>:<cu>
assigns the specified clock frequency to all clock pins on the specified CU. - Specifying
--clock.freqHz=<Hz>:<cu>.<clk0>
assigns the specified clock frequency to the specified clock pin on the specified CU.
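To illustrate the precedence rules above, the following configuration sketch combines a general default with a more specific per-CU assignment; the CU name vadd_2 is hypothetical:

```
[clock]
# General rule: clock ID 1 from the platform is the default for all kernels
defaultId=1
# More specific rule: this CU overrides the default and runs at 300 MHz
freqHz=300000000:vadd_2
```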
--clock.defaultFreqHz
--clock.defaultFreqHz <arg>
Specifies a default clock frequency in Hz to use for all kernels.
This lets you override the default platform clock, and assign the clock with the
specified clock frequency as the default. Where <arg>
is specified as the clock frequency in Hz.
For example:
v++ --link --clock.defaultFreqHz 300000000
This option can also be specified in a configuration file under the [clock] section head using the following format:
[clock]
defaultFreqHz=300000000
--clock.defaultId
--clock.defaultId <arg>
--clock.defaultId=<id>
defines a specific clock ID for all
kernels, overriding the platform default clock. Where <arg>
is specified as the clock ID from one of the clocks
defined on the target platform, other than the default clock ID. TIP: You can determine the available clock IDs for a target platform using the platforminfo
utility as described in platforminfo Utility. For example:
v++ --link --clock.defaultId 1
This option can also be specified in a configuration file under the [clock] section head using the following format:
[clock]
defaultId=1
--clock.defaultTolerance
--clock.defaultTolerance <arg>
Specifies a default clock tolerance as a value, or as a percentage
of the default clock frequency. When specifying clock.defaultFreqHz
, you can also specify the tolerance with either a
value or percentage. This will update timing constraints to reflect the accepted
tolerance.
The tolerance value, <arg>, can be specified as a whole
number, indicating the clock.defaultFreqHz
± the
specified tolerance; or as a percentage of the default clock frequency specified as
a decimal value.
For example:
v++ --link --clock.defaultFreqHz 300000000 --clock.defaultTolerance 0.10
This option can also be specified in a configuration file under the [clock] section head using the following format:
[clock]
defaultTolerance=0.10
--clock.freqHz
--clock.freqHz <arg>
Specifies a clock frequency in Hz and assigns it to a list of
associated compute units (CUs) and optionally specific clock pins on the CU. Where
<arg>
is specified as <frequency_in_Hz>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]
:
<frequency_in_Hz>
: Defines the clock frequency specified in Hz.<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]
: Applies the defined frequency to the specified CUs, and optionally to the specified clock pin on the CU.
v++ --link --clock.freqHz 300000000:vadd_1,vadd_3
This option can also be specified in a configuration file under the [clock] section head using the following format:
[clock]
freqHz=300000000:vadd_1,vadd_3
--clock.id
--clock.id <arg>
Specifies an available clock ID from the target platform and
assigns it to a list of associated compute units (CUs) and optionally specific clock
pins on the CU. Where <arg>
is specified as
<reference_ID>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]
:
<reference_ID>
: Defines the clock ID to use from the target platform.TIP: You can determine the available clock IDs for a target platform using theplatforminfo
utility as described in platforminfo Utility.<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]
: Applies the defined frequency to the specified CUs and optionally to the specified clock pin on the CU.
For example:
v++ --link --clock.id 1:vadd_1,vadd_3
This option can also be specified in a configuration file under the [clock] section head using the following format:
[clock]
id=1:vadd_1,vadd_3
--clock.tolerance
--clock.tolerance <arg>
Specifies a clock tolerance as a value, or as a percentage of the
clock frequency. When specifying --clock.freqHz
,
you can also specify the tolerance with either a value or percentage. This will
update timing constraints to reflect the accepted tolerance. Where <arg> is
specified as
<tolerance>:<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]
<tolerance>
: Can be specified either as a whole number, indicating theclock.freqHz
± the specified tolerance value; or as a percentage of the clock frequency specified as a decimal value.<cu_0>[.<clk_pin_0>][,<cu_n>[.<clk_pin_n>]]
: Applies the defined clock tolerance to the specified CUs, and optionally to the specified clock pin on the CU.
TIP: The default tolerance is applied to clock.freqHz
when this option is not specified. For example:
v++ --link --clock.tolerance 0.10:vadd_1,vadd_3
This option can also be specified in a configuration file under the [clock] section head using the following format:
[clock]
tolerance=0.10:vadd_1,vadd_3
--connectivity Options
As discussed in Linking the Kernels, there
are a number of --connectivity.XXX
options that let
you define the topology of the FPGA binary, specifying the number of CUs, assigning them
to SLRs, connecting kernel ports to global memory, and establishing streaming port
connections. These commands are an integral part of the build process, critical to the
definition and construction of the application.
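As a sketch of how these directives combine, the following configuration file instantiates two CUs, pins each to an SLR, and maps their ports to DDR banks. The kernel name vadd and the argument name in1 are hypothetical:

```
[connectivity]
# Instantiate two CUs of the vadd kernel
nk=vadd:2:vadd_1.vadd_2
# Assign each CU to an SLR
slr=vadd_1:SLR0
slr=vadd_2:SLR1
# Connect each CU's in1 argument to a different DDR bank
sp=vadd_1.in1:DDR[0]
sp=vadd_2.in1:DDR[1]
```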
--connectivity.nk
--connectivity.nk <arg>
Where <arg>
is specified
as <kernel_name>:#:<cu_name1>.<cu_name2>...<cu_name#>
.
This instantiates the specified number of CUs (#
) for the
specified kernel (kernel_name
) in the generated FPGA binary
(.xclbin
) file during the linking process. The
cu_name
is optional. If the cu_name
is not
specified, the instances of the kernel are simply numbered: kernel_name_1
, kernel_name_2
, and so
forth. By default, the Vitis compiler
instantiates one compute unit for each kernel.
For example:
v++ --link --connectivity.nk vadd:3:vadd_A.vadd_B.vadd_C
This option can also be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
nk=vadd:3:vadd_A.vadd_B.vadd_C
--connectivity.slr
--connectivity.slr <arg>
Use this option to assign a CU to a specific SLR on the device. The option must be repeated for each kernel or CU being assigned to an SLR.
IMPORTANT: If you use --connectivity.slr
to assign the kernel placement, then you must also
use --connectivity.sp
to assign memory access for
the kernel.
Valid values include:
<cu_name>:<SLR_NUM>
Where:
<cu_name>
is the name of the compute unit as specified in the--connectivity.nk
option. Generally this will be<kernel_name>_1
unless a different name was specified.<SLR_NUM>
is the SLR number to assign the CU to. For example, SLR0, SLR1.
For example, to assign CU vadd_2
to SLR2, and CU fft_1
to SLR1, use the
following:
v++ --link --connectivity.slr vadd_2:SLR2 --connectivity.slr fft_1:SLR1
This option can also be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
slr=vadd_2:SLR2
slr=fft_1:SLR1
--connectivity.sp
--connectivity.sp <arg>
Use this option to specify the assignment of kernel interfaces to system
ports within the platform. A primary use case for this option is to connect kernel
ports to specific memory resources. A separate --connectivity.sp
option is required to map each interface of a kernel
to a particular memory resource. Any kernel interface not explicitly mapped to a
memory resource through the --connectivity.sp
option will be automatically connected to an available memory resource during the
build process.
Valid values include:
<cu_name>.<kernel_interface_name>:<sptag[min:max]>
Where:
<cu_name>
is the name of the compute unit as specified in the--connectivity.nk
option. Generally this will be<kernel_name>_1
unless a different name was specified.<kernel_interface_name>
is the name of the function argument for the kernel, or compute unit port.<sptag>
represents a system port tag, such as for memory controller interface names from the target platform. Valid<sptag>
names include DDR, PLRAM, and HBM.[min:max]
enables the use of a range of memory, such as DDR[0:2]. A single index is also supported: DDR[2].
TIP: The valid <sptag>
names and range
of memory resources for a target platform can be obtained using the platforminfo
command. Refer to platforminfo Utility for more information.
The following example maps the input argument (A) for the specified CU of the VADD kernel to DDR[0:3], input argument (B) to HBM[0:31], and writes the output argument (C) to PLRAM[2]:
v++ --link --connectivity.sp vadd_1.A:DDR[0:3] --connectivity.sp vadd_1.B:HBM[0:31] \
--connectivity.sp vadd_1.C:PLRAM[2]
This option can also be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
sp=vadd_1.A:DDR[0:3]
sp=vadd_1.B:HBM[0:31]
sp=vadd_1.C:PLRAM[2]
--connectivity.sc
--connectivity.sc <arg>
Create a streaming connection between two compute units through
their AXI4-Stream interfaces. Use a separate
--connectivity.sc
command for each streaming
interface connection. The order of connection must be from a streaming output port
of the first kernel to a streaming input port of the second kernel. Valid values
include:
<cu_name>.<streaming_output_port>:<cu_name>.<streaming_input_port>[:<fifo_depth>]
Where:
<cu_name>
is the compute unit name specified in the--connectivity.nk
option. Generally this will be<kernel_name>_1
unless a different name was specified.<streaming_output_port>/<streaming_input_port>
is the function argument for the compute unit port that is declared as an AXI4-Stream.[:<fifo_depth>]
inserts a FIFO of the specified depth between the two streaming ports to prevent stalls. The value is specified as an integer.
For example, to connect the AXI4-Stream port s_out
of the
compute unit mem_read_1
to AXI4-Stream port s_in
of the compute unit increment_1
, use the following:
--connectivity.sc mem_read_1.s_out:increment_1.s_in
This option can also be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
sc=mem_read_1.s_out:increment_1.s_in
The inclusion of the optional <fifo_depth> value lets the
v++
linker add a FIFO between the two kernels
to help prevent stalls. This will use BRAM resources from the device when specified,
but eliminates the need to update the HLS kernel to contain FIFOs. The tool will
also instantiate a Clock Converter (CDC) or Datawidth Converter (DWC) IP if the
connections have different clocks, or different bus widths.
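For instance, appending the optional depth to the connection shown above asks the linker to insert a FIFO between the two streaming ports; the depth of 32 here is a hypothetical value:

```
[connectivity]
# Stream mem_read_1.s_out into increment_1.s_in through a 32-deep FIFO
sc=mem_read_1.s_out:increment_1.s_in:32
```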
--hls Options
The --hls.XXX
options described below are
used to specify options for the Vitis HLS
synthesis process invoked during kernel compilation.
--hls.clock
--hls.clock <arg>
Specifies a frequency in Hz at which the listed kernel(s) should be compiled by Vitis HLS.
Where <arg>
is specified as:
<frequency_in_Hz>:<cu_name1>,<cu_name2>,..,<cu_nameN>
<frequency_in_Hz>
: Defines the kernel frequency specified in Hz.<cu_name1>,<cu_name2>,...
: Defines a list of kernels or kernel instances (CUs) to be compiled at the specified target frequency.
For example:
v++ -c --hls.clock 300000000:mmult,mmadd --hls.clock 100000000:fifo_1
This option can also be specified in a configuration file under the [hls] section head using the following format:
[hls]
clock=300000000:mmult,mmadd
clock=100000000:fifo_1
--hls.export_mode
--hls.export_mode
Specifies an export mode from HLS with the path to an exported
file. The value is specified as <file_type>:<file_path>
.
Where <file_type>
can only
be specified as xo
for Xilinx object file.
For example:
v++ --hls.export_mode xo:./hls_kernel.xo
This option can also be specified in a configuration file under the [hls] section head using the following format:
[hls]
export_mode=xo:./hls_kernel.xo
--hls.export_project
--hls.export_project
Specifies a directory where the HLS project setup script is exported.
For example:
v++ --hls.export_project ./hls_export
This option can also be specified in a configuration file under the [hls] section head using the following format:
[hls]
export_project=./hls_export
--hls.max_memory_ports
--hls.max_memory_ports <arg>
Indicates that a separate AXI interface port should be created for each
argument of a kernel. If not enabled, the compiler creates a single AXI interface
combining all kernel ports of the same type. Valid values include all
to apply the option to all kernels, or a specific <kernel_name>
.
This option is valid only for OpenCL kernels.
For example:
v++ --hls.max_memory_ports vadd
This option can also be specified in a configuration file under the [hls] section head using the following format:
[hls]
max_memory_ports=vadd
--hls.memory_port_data_width
--hls.memory_port_data_width <arg>
Sets the memory port data width to the specified <number>
for all kernels, or for a given
<kernel name>
. Valid values include
<number>
or <kernel_name>:<number>
.
Valid for OpenCL kernels.
For example:
v++ --hls.memory_port_data_width 256
This option can also be specified in a configuration file under the [hls] section head using the following format:
[hls]
memory_port_data_width=256
--linkhook Options
The --linkhook.XXX
options described
below are used to specify Tcl scripts to run at specific points during the Vitis linking process.
--linkhook.custom
--linkhook.custom <arg>
Where <arg>
is specified as <step name, path to script
file>
.
Specify a Tcl script to execute at a factory predefined point in an internal step. The path to specify the script can be an absolute path, or partial path relative to the build directory.
v++ -l --linkhook.custom step,runScript.tcl
--linkhook.do_first
--linkhook.do_first <arg>
Where <arg>
is specified as <step name, path to script
file>
.
Specify a Tcl script to execute as a precondition to the given step name. The path to specify the script can be an absolute path, or partial path relative to the build directory.
v++ -l --linkhook.do_first step,runScript.tcl
--linkhook.do_last
--linkhook.do_last <arg>
Where <arg>
is specified as <step name, path to script
file>
.
Specify a Tcl script to execute immediately after the given step completes. The path to specify the script can be an absolute path, or partial path relative to the build directory.
v++ -l --linkhook.do_last step,runScript.tcl
--linkhook.list_steps
--linkhook.list_steps
List run steps that support script hooks for a given target (use with
--target
). Also lists custom factory defined hooks by name.
v++ -l --linkhook.list_steps
--package Options
Introduction
The v++ -package
,
or -p
step, packages the final product at the end
of the v++
compile and link build process.
Some limitations of the --package
command include:
v++ -p
cannot be used with non-extensible ("fixed") platforms as found in the bare metal design flow.
The various options of --package
include the following:
--package.bl31_elf
--package.bl31_elf <arg>
Where <arg>
specifies the
absolute or relative path to the Arm trusted firmware ELF file that will execute on the A72 #0 core.
For example:
v++ -l --package.bl31_elf ./arm_trusted.elf
--package.boot_mode
--package.boot_mode <arg>
Where <arg>
specifies the boot mode,
<ospi | qspi | sd>
, used for
running the application in emulation or on hardware.
For example:
v++ -l --package.boot_mode sd
--package.domain
--package.domain <arg>
Where <arg>
specifies a
domain name.
For example:
v++ -l --package.domain xrt
--package.dtb
--package.dtb <arg>
Where <arg>
specifies the
absolute or relative path to device tree binary (DTB) used for loading Linux on the
APU.
For example:
v++ -l --package.dtb ./device_tree.image
--package.image_format
--package.image_format <arg>
Where <arg>
specifies
<ext4 | fat32>
output image file
format.
- ext4: Linux file system
- fat32: Windows file system
For example:
v++ -l --package.image_format fat32
--package.kernel_image
--package.kernel_image <arg>
Where <arg>
specifies the absolute
or relative path to a Linux kernel image file. Overrides the existing image
available in the platform. The platform image file is available for download from
Xilinx.com. Refer to the Vitis Software Platform Installation for
more information.
For example:
v++ -l --package.kernel_image ./kernel_image
--package.no_image
--package.no_image
Bypass SD card image creation. Valid for
--package.boot_mode sd
.
--package.out_dir
--package.out_dir <arg>
Where <arg>
specifies the absolute
or relative path to the output directory of the --package
command.
For example:
v++ -l --package.out_dir ./out_dir
--package.ps_debug_port
--package.ps_debug_port <arg>
Where <arg>
specifies the
TCP port where emulator will listen for incoming connections from the debugger to
debug PS cores.
For example:
v++ -l --package.ps_debug_port 3200
--package.ps_elf
--package.ps_elf <arg>
Where <arg>
specifies
<ps.elf,core>
.
- ps.elf: Specifies the ELF file for the PS core.
- core: Indicates the PS core it should run on.
For example:
v++ -l --package.ps_elf a72_0.elf,a72-0
--package.rootfs
--package.rootfs <arg>
Where <arg>
specifies the absolute
or relative path to a processed Linux root file system file. The platform RootFS
file is available for download from Xilinx.com. Refer to the Vitis Software Platform Installation for more information.
For example:
v++ -l --package.rootfs ./rootfs.ext4
--package.sd_dir
--package.sd_dir <arg>
Where <arg>
specifies a folder to
package into the sd_card
directory/image.
For example:
v++ -l --package.sd_dir ./test_data
--package.sd_file
--package.sd_file <arg>
Where <arg>
specifies an
ELF or other data file to package into the sd_card
directory/image. This option can be used repeatedly to specify
multiple files to add to the sd_card
.
For example:
v++ -l --package.sd_file ./arm_trusted.elf
--package.uboot
--package.uboot <arg>
Where <arg>
specifies a
path to a U-Boot ELF file that overrides the platform U-Boot.
For example:
v++ -l --package.uboot ./uboot.elf
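Taken together, several of the packaging options above are typically combined on one v++ -p command line. The following sketch assumes hypothetical file names for the Linux image, root file system, and application ELF; other required inputs, such as the platform and the .xclbin file, are omitted for brevity:

```
v++ -p --package.boot_mode sd \
  --package.image_format fat32 \
  --package.kernel_image ./Image \
  --package.rootfs ./rootfs.ext4 \
  --package.sd_file ./app.elf \
  --package.out_dir ./package_out
```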
--vivado Options
The --vivado.XXX
options are paired
with parameters and properties to configure the Vivado tools. For instance, you can configure optimization, placement,
and timing, or specify which reports to output.
--vivado.param
--vivado.param <arg>
Specifies parameters for the Vivado Design Suite to be used during synthesis and implementation of the FPGA binary (xclbin).
--vivado.prop
--vivado.prop <arg>
Specifies properties for the Vivado Design Suite to be used during synthesis and implementation of the FPGA binary (xclbin).
Property Name | Valid Values | Description |
---|---|---|
vivado.prop
<object_type>.<object_name>.<prop_name> |
Type: Various | This allows you to specify any property used in
the Vivado hardware compilation
flow.
|
v++ --link --vivado.prop run.impl_1.STEPS.PHYS_OPT_DESIGN.IS_ENABLED=true
--vivado.prop run.impl_1.STEPS.PHYS_OPT_DESIGN.ARGS.DIRECTIVE=Explore
--vivado.prop run.impl_1.STEPS.PLACE_DESIGN.TCL.PRE=/…/xxx.tcl
The example above enables the PHYS_OPT_DESIGN step, applies the Explore
directive for that step, and
specifies a Tcl script to run before the PLACE_DESIGN step.
These options can also be specified in a configuration file under the [vivado]
section head using the following
format:
[vivado]
prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.IS_ENABLED=true
prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.ARGS.DIRECTIVE=Explore
prop=run.impl_1.STEPS.PLACE_DESIGN.TCL.PRE=/…/xxx.tcl
IMPORTANT: The placement of braces in --vivado
options is important. You must surround the complete property
name with braces, rather than just a portion of it. For instance, the correct
placement would be:
--vivado.prop run.impl_1.{STEPS.PLACE_DESIGN.ARGS.MORE OPTIONS}={-fanout_opt}
The following is incorrect, because only a portion of the property name is enclosed in braces:
--vivado.prop run.impl_1.STEPS.PLACE_DESIGN.ARGS.{MORE OPTIONS}={-fanout_opt}
Vitis Compiler Configuration File
A configuration file can also be used to specify the Vitis compiler options. A configuration file provides an organized way of
passing options to the compiler by grouping similar switches together, and minimizing
the length of the v++
command line. Some of the
features that can be controlled through config file entries include:
- HLS options to configure kernel compilation
- Connectivity directives for system linking such as the number of kernels to instantiate or the assignment of kernel ports to global memory
- Directives for the Vivado Design Suite to manage hardware synthesis and implementation.
In general, any v++
command option can be
specified in a configuration file. However, the configuration file supports defining
sections containing groups of related commands to help manage build options and
strategies. The following table lists the defined sections.
Section Name | Compiler/Linker | Description |
---|---|---|
[hls] | compiler | HLS directives. Refer to --hls Options. |
[clock] | compiler | Clock commands. Refer to --clock Options. |
[connectivity] | linker | Connectivity directives. Refer to --connectivity Options. |
[vivado] | linker | Vivado Design Suite directives. Refer to --vivado Options. |
[advanced] | either | Advanced options. Refer to --advanced Options. |
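Putting the sections together, a single configuration file might look like the following sketch. The kernel name vadd and its argument A are hypothetical; the platform is the example used earlier in this section:

```
platform=xilinx_u200_xdma_201830_2
[hls]
clock=300000000:vadd
[connectivity]
nk=vadd:2
sp=vadd_1.A:DDR[0]
[vivado]
prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.IS_ENABLED=true
```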
Because the v++
command supports multiple
config files on a single v++
command line, you can
partition your configuration files into related options that define compilation and
linking strategies or Vivado implementation strategies, and apply
multiple config files during the build process.
Configuration files are optional. There are no naming restrictions on the
files and the number of configuration files can be zero or more. All v++
options can be put in a single configuration file if
desired. However, grouping related switches into separate files can help you organize
your build strategy. For example, group [connectivity]
related switches in one file, and [vivado]
options
into a separate file.
The configuration file is specified through the use of the v++ --config
option as discussed in the Vitis Compiler General Options. An example of the --config
option follows:
v++ --config ../src/connectivity.cfg
Switches are read in the order they are encountered. If the same switch is repeated with conflicting information, the first switch read is used. The order of precedence for switches is as follows, where item one takes highest precedence:
- Command line switches.
- Config files (on command line) from left-to-right.
- Within a config file, precedence is from top-to-bottom.
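As a sketch of these precedence rules, suppose two hypothetical configuration files both set the number of vadd CUs and are passed as v++ --config first.cfg --config second.cfg:

```
# first.cfg -- read first, so its value is used
[connectivity]
nk=vadd:2

# second.cfg -- conflicts with first.cfg, so this value is ignored
[connectivity]
nk=vadd:4
```

A --connectivity.nk switch given directly on the v++ command line would take precedence over both files.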
Using the Message Rule File
The v++
command executes various Xilinx tools during kernel compilation and linking. These tools
generate many messages that provide build status to you. These messages might or might not be
relevant to you depending on your focus and design phase. The Message Rule file (.mrf) can be used to better manage these messages. It provides
commands to promote important messages to the terminal or suppress unimportant ones. This
helps you better understand the kernel build result and explore methods to optimize the
kernel.
The Message Rule file is a text file consisting of comments and supported commands. Only one command is allowed on each line.
Comment
Any line with “#
” as the first non-white space
character is a comment.
Supported Commands
By default, v++
recursively scans the
entire working directory and promotes all error messages to the v++
output. The promote
and suppress
commands below provide more control on the v++
output.
promote
: This command indicates that matching messages should be promoted to thev++
output.suppress
: This command indicates that matching messages should be suppressed or filtered from thev++
output. Note that errors cannot be suppressed.
Enter only one command per line.
Command Options
The Message Rule file can have multiple promote
and suppress
commands. Each command can have
one and only one of the options below. The options are case-sensitive.
-id [<message_id>]
: All messages matching the specified message ID are promoted or suppressed. The message ID is in the format nnn-mmm. As an example, the following is a warning message from HLS; the message ID in this case is 204-68:
WARNING: [V++ 204-68] Unable to enforce a carried dependence constraint (II = 1, distance = 1, offset = 1) between bus request on port 'gmem' (/matrix_multiply_cl_kernel/mmult1.cl:57) and bus request on port 'gmem'
For example, to suppress messages with message ID 204-68, specify the following:
suppress -id 204-68
-severity [<severity_level>]
: All messages matching the specified severity level are promoted or suppressed. The following are valid values for the severity level:
info
warning
critical_warning
For example, to promote messages with severity of critical_warning, specify the following:
promote -severity critical_warning
Precedence of Message Rules
The suppress
rules take precedence over
promote
rules. If the same message ID or severity level is
passed to both promote
and suppress
commands in the Message Rule file, the matching messages are suppressed and
not displayed.
Example of Message Rule File
The following is an example of a valid Message Rule file:
# promote all warning, critical warning
promote -severity warning
promote -severity critical_warning
# suppress the critical warning message with id 19-2342
suppress -id 19-2342