Building the Device Binary
The kernel code is written in C, C++, OpenCL C, or RTL, and is built by compiling the kernel code into a Xilinx object file (.xo), and linking the .xo files into a device binary file (.xclbin), as shown in the following figure.
The process, as outlined above, has two steps:
- Build the Xilinx object files from the kernel source code.
  - For C, C++, or OpenCL kernels, the v++ -c command compiles the source code into Xilinx object (.xo) files. Multiple kernels are compiled into separate .xo files.
  - For RTL kernels, the package_xo command produces the .xo file to be used for linking. Refer to RTL Kernels for more information.
  - You can also create kernel object files (.xo) working directly in the Vitis HLS tool. Refer to Compiling Kernels with Vitis HLS for more information.
- After compilation, the v++ -l command links one or multiple kernel objects (.xo), together with the hardware platform (.xsa), to produce the device binary (.xclbin).
The v++ command can be used from the command line, in scripts, or in a build system like make, and can also be used through the IDE as discussed in Using the Vitis IDE.
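For example, a minimal make fragment along the following lines can drive the two-step flow. This is only a sketch: the platform, build target, kernel name, and file names are taken from the vadd example used later in this section, and the recipe lines must be indented with a tab character.
PLATFORM := xilinx_u200_xdma_201830_2
TARGET := sw_emu
# Compile the kernel source into a Xilinx object file (.xo)
vadd.$(TARGET).xo: ./src/vadd.cpp
	v++ -t $(TARGET) --platform $(PLATFORM) -c -k vadd -I'./src' -o $@ $<
# Link the object file with the platform to produce the device binary (.xclbin)
vadd.$(TARGET).xclbin: vadd.$(TARGET).xo
	v++ -t $(TARGET) --platform $(PLATFORM) --link -o $@ $<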
Compiling Kernels with Vitis Compiler
There are multiple v++ options that need to be used to correctly compile your kernel. The following is an example command line to compile the vadd kernel:
v++ -t sw_emu --platform xilinx_u200_xdma_201830_2 -c -k vadd \
-I'./src' -o'vadd.sw_emu.xo' ./src/vadd.cpp
The various arguments used are described below. Note that some of the arguments are required.
- -t sw_emu: Specifies the build target as software emulation, as discussed in Build Targets. Optional. The default is hw.
- --platform xilinx_u200_xdma_201830_2: Specifies the accelerator platform for the build. This is required because runtime features and the target platform are linked as part of the FPGA binary. To compile a kernel for an embedded processor application, you simply specify an embedded processor platform: --platform $PLATFORM_REPO_PATHS/zcu102_base/zcu102_base.xpfm.
- -c: Compile the kernel. Required. The kernel must be compiled (-c) and linked (-l) in two separate steps.
- -k vadd: Name of the kernel associated with the source files.
- ./src/vadd.cpp: Specifies the source files for the kernel. Multiple source files can be specified. Required.
- -o'vadd.sw_emu.xo': Specifies the Xilinx object file (.xo) output by the compiler. Optional.
Refer to Vitis Compiler Command for details of the various command line options. Refer to Output Directories from the v++ Command to get an understanding of the location of various output files.
Compiling Kernels with Vitis HLS
The use model described for the Vitis core development kit is a top-down approach, starting with C/C++ or OpenCL code, and working toward compiled kernels.
However, you can also directly develop the kernel to produce a Xilinx® object (.xo) file to be used for linking with v++ to produce the .xclbin. This approach can be used for C/C++ kernels using the Vitis HLS tool, which is the focus of this section, or for RTL kernels using the Vivado Design Suite. Refer to RTL Kernels for more information.
The approach of developing the kernel directly, either in RTL or C/C++, to produce an .xo file, is sometimes referred to as the bottom-up flow. This allows you to validate kernel performance and perform optimizations within the Vitis HLS tool, and export the Xilinx® object file for use in the Vitis application acceleration development flow. Refer to the Vitis HLS Flow for more information on using that tool.
The benefits of the Vitis HLS bottom-up flow can include:
- Design, validate, and optimize the kernel separately from the main application.
- Enables a team approach to design, with collaboration on host program and kernel development.
- Specific kernel optimizations are preserved in the .xo file.
- A collection of .xo files can be used and reused like a library.
Creating Kernels in Vitis HLS
Generating kernels from C/C++ code for use in the Vitis core development kit follows the standard Vitis HLS process. However, because the kernel is required to operate in the Vitis software platform, the standard kernel requirements must be satisfied (see Kernel Properties). Most importantly, the interfaces must be modeled as AXI memory interfaces, except for scalar parameters which are mapped to an AXI4-Lite interface. Vitis HLS automatically defines the interface ports to meet the standard kernel requirements when using the Vitis Bottom Up Flow as described here.
The process for creating and compiling your HLS kernel is outlined briefly below. You should refer to Creating a New Vitis HLS Project in the Vitis HLS Flow documentation for a more complete description of this process.
- Launch Vitis HLS to open the GUI, and create a new project.
- In the New Vitis HLS Project dialog box, specify the Project name, define the Location for the project, and click Next.
- In the Add/Remove files dialog box, click Add Files to add the kernel source code to the project. Select Top Function to define the kernel function by clicking the Browse button, and click Next when done.
- You can specify a C-based simulation test bench if you have one
available, by clicking Add Files, or skip
this by clicking Next. TIP: As discussed in the Vitis HLS documentation, the use of a test bench is strongly recommended.
- In the Solution Configuration dialog box, you must specify the
Clock Period for the kernel.
- Choose the target platform by clicking the Browse button in the Part Selection field to open the Device Selection dialog box. Select the Boards command, and select the target platform for your compiled kernel, as shown below. Click OK to select the platform and return to the Solution Configuration dialog box.
- In the Solution Configuration dialog box, enable the Vitis Bottom Up Flow check box, and click
Finish to complete the process and
create your HLS kernel project. IMPORTANT: You must enable the Vitis Bottom Up Flow to generate the Xilinx object (.xo) file from the project.
When the HLS project has been created, you can use the Run C Synthesis command to compile the kernel code. Refer to the Vitis HLS documentation for a complete description of the HLS tool flow.
After synthesis is completed, the kernel can be exported as an .xo file for use in the Vitis core development kit. The export command is available from the main menu.
Specify the file location, and the kernel is exported as a Xilinx object .xo file.
The (.xo) file can be used as an
input file during the v++
linking process. Refer to
Linking the Kernels for more information. You can also
add it to an application project in the Vitis
integrated design environment (IDE), as discussed in Creating a Vitis IDE Project.
However, keep in mind that HLS kernels, created in the bottom-up flow described here, have certain limitations when used in the Vitis application acceleration development flow. Software emulation is not supported for applications using HLS kernels, because duplicated header file dependencies can create issues. GDB debug is not supported in the hardware emulation flow for HLS kernels, or RTL kernels.
Vitis HLS Script for Creating Kernels
If you run HLS synthesis through Tcl scripts, you can edit the following script to create HLS kernels as previously described:
# Define variables for your HLS kernel:
set projName <proj_name>
set krnlName <kernel_name>
set krnlFile <kernel_source_code>
set krnlTB <kernel_test_bench>
set krnlPlatform <target_part>
set path <path_to_project>
#Script to create and output HLS kernel
open_project $projName
set_top $krnlName
add_files $krnlFile
add_files -tb $krnlTB
open_solution "solution1"
set_part $krnlPlatform
create_clock -period 10 -name default
config_flow -target vitis
csim_design
csynth_design
cosim_design
export_design -flow impl -format xo -output "./hlsKernel/hlsKernel.xo"
Run the HLS kernel script by using the following command after setting up your environment as discussed in Setting up the Vitis Environment.
vitis_hls -f <hls_kernel_script>.tcl
Packaging RTL Kernels with package_xo
Kernels written in RTL are compiled in the Vivado tool using the package_xo command line utility, which generates a Xilinx object file (.xo) that can subsequently be used by the v++ command during the linking stage. (See package_xo Command.) The process for creating RTL kernels and using the package_xo command to generate an .xo file is described in RTL Kernels.
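For reference, a minimal sketch of a package_xo invocation, run from the Vivado Tcl console, is shown below; the kernel name, kernel.xml location, and IP directory are placeholders only, and the full set of arguments is described in the package_xo Command documentation.
package_xo -xo_path ./vadd.xo -kernel_name vadd \
    -kernel_xml ./src/kernel.xml -ip_directory ./ip_repo/vadd_ip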
Linking the Kernels
The kernel compilation process results in a Xilinx object file (.xo) whether the kernel is written in C/C++, OpenCL C, or RTL. During the linking stage, .xo files from different kernels are linked with the platform to create the FPGA binary container file (.xclbin) used by the host program.
For example, the following command links the previously compiled vadd kernel object file with the target platform to produce the vadd kernel binary:
v++ -t sw_emu --platform xilinx_u200_xdma_201830_2 --link vadd.sw_emu.xo \
-o'vadd.sw_emu.xclbin' --config ./connectivity.cfg
This command contains the following arguments:
- -t sw_emu: Specifies the build target. When linking, you must use the same -t and --platform arguments as specified when the input file (.xo) was compiled.
- --platform xilinx_u200_xdma_201830_2: Specifies the platform to link the kernels with. To link the kernels for an embedded processor application, you simply specify an embedded processor platform: --platform $PLATFORM_REPO_PATHS/zcu102_base/zcu102_base.xpfm
- --link: Link the kernels and platform into an FPGA binary file (.xclbin).
- vadd.sw_emu.xo: Input object file. Multiple object files can be specified to build into the .xclbin.
- -o'vadd.sw_emu.xclbin': Specifies the output file name. The output file in the link stage will be an .xclbin file. The default output name is a.xclbin.
- --config ./connectivity.cfg: Specifies a configuration file that is used to provide v++ command options for a variety of uses. Refer to Vitis Compiler Command for more information on the --config option. A small hypothetical example follows below.
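For illustration only, a connectivity.cfg file for this example might contain entries such as the following; the [connectivity] options shown here (nk and sp) are described in the sections that follow, and the CU counts, port names, and bank assignments are assumptions for the vadd example rather than required values.
[connectivity]
# Instantiate two compute units of the vadd kernel (see Creating Multiple Instances of a Kernel)
nk=vadd:2
# Map each CU memory interface to a different DDR bank (assumed port names; see Mapping Kernel Ports to Global Memory)
sp=vadd_1.m_axi_gmem:DDR[0]
sp=vadd_2.m_axi_gmem:DDR[1]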
Beyond simply linking the Xilinx object files (.xo), the linking process is also where important architectural details are determined. In particular, this is where the number of compute units (CUs) to instantiate into hardware is specified, connections from kernel ports to global memory are assigned, and CUs are assigned to SLRs. The following sections discuss some of these build options.
Creating Multiple Instances of a Kernel
By default, the linker builds a single hardware instance from a kernel. If the host program will execute the same kernel multiple times, due to data processing requirements for instance, then it must execute the kernel on the hardware accelerator in a sequential manner. This can impact overall application performance. However, you can customize the kernel linking stage to instantiate multiple hardware compute units (CUs) from a single kernel. This can improve performance as the host program can now make multiple overlapping kernel calls, executing kernels concurrently by running separate compute units.
Multiple CUs of a kernel can be created by using the connectivity.nk
option in the v++
config
file during linking. Edit a config file to include the needed options, and specify it in
the v++
command line with the --config
option, as described in Vitis Compiler Command.
For example, for the vadd kernel, two hardware instances can be implemented in the config file as follows:
[connectivity]
#nk=<kernel name>:<number>:<cu_name>.<cu_name>...
nk=vadd:2
Where:
- <kernel_name>: Specifies the name of the kernel to instantiate multiple times.
- <number>: The number of kernel instances, or CUs, to implement in hardware.
- <cu_name>.<cu_name>...: Specifies the instance names for the specified number of instances. This is optional, and the CU names default to <kernel_name>_1, <kernel_name>_2, and so forth, when not specified.
The config file is then specified on the v++ command line:
v++ --config vadd_config.txt ...
In the vadd example above, the result is two instances of the vadd kernel, named vadd_1 and vadd_2.
TIP: You can use the xclbinutil command to examine the contents of the xclbin file. Refer to xclbinutil Utility. A hypothetical example command follows the config file below.
The following example results in three CUs of the vadd kernel, named vadd_X, vadd_Y, and vadd_Z, in the xclbin binary file:
[connectivity]
nk=vadd:3:vadd_X.vadd_Y.vadd_Z
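As a sketch of the tip above, the following command prints information about a generated binary, including the kernels and their CU instance names; the file name here is assumed from the earlier linking example, and the available options are described in xclbinutil Utility.
xclbinutil --input vadd.sw_emu.xclbin --info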
Mapping Kernel Ports to Global Memory
The link phase is when the memory ports of the kernels are connected to memory
resources which include DDR, HBM, and PLRAM. By default, when the xclbin
file is produced during the v++
linking
process, all kernel memory interfaces are connected to the same global memory bank (or gmem
). As a result, only one kernel interface can transfer data
to/from the memory bank at one time, limiting the performance of the application due to memory
access.
If the FPGA contains only one global memory bank, this is the only available approach. However, all of the Alveo Data Center accelerator cards contain multiple global memory banks. During the linking stage, you can specify which global memory bank each kernel port (or interface) is connected to. Proper configuration of kernel to memory connectivity is important to maximize bandwidth, optimize data transfers, and improve overall performance. Even if there is only one compute unit in the device, mapping its input and output ports to different global memory banks can improve performance by enabling simultaneous accesses to input and output data.
Use the --connectivity.sp option to distribute connections across different memory banks:
- Specify the kernel interfaces with different bundle names, as discussed in Kernel Interfaces.
- During v++ linking, use the connectivity.sp option in a config file to map the kernel port to the desired memory bank.
To map the kernel ports to global memory banks using the connectivity.sp
option of the v++
config file, use the following steps.
- Starting with the kernel code example from Kernel Interfaces:
void cnn( int *pixel, // Input pixel
          int *weights, // Input Weight Matrix
          int *out, // Output pixel
          ... // Other input or output ports
#pragma HLS INTERFACE m_axi port=pixel offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=weights offset=slave bundle=gmem1
#pragma HLS INTERFACE m_axi port=out offset=slave bundle=gmem
Note that the memory interface inputs pixel and weights are assigned different bundle names in the example above. This creates two separate ports that can be assigned to separate global memory banks.
IMPORTANT: You must specify bundle= names using all lowercase characters to be able to assign it to a specific memory bank using the --connectivity.sp option.
- During v++ linking, the separate ports can be mapped to different global memory banks. Edit a config file to include the --connectivity.sp option, and specify it in the v++ command line with the --config option, as described in Vitis Compiler Command. For example, for the cnn kernel shown above, the connectivity.sp option in the config file would be as follows:
[connectivity]
#sp=<compute_unit_name>.<interface_name>:<bank_name>
sp=cnn_1.m_axi_gmem:DDR[0]
sp=cnn_1.m_axi_gmem1:DDR[1]
Where:
- <compute_unit_name> is an instance name of the CU as determined by the connectivity.nk option, described in Creating Multiple Instances of a Kernel, or is simply <kernel_name>_1 if multiple CUs are not specified.
- <interface_name> is the name of the kernel port as defined by the HLS INTERFACE pragma, including m_axi_ and the bundle name. In the cnn kernel above, the ports would be m_axi_gmem and m_axi_gmem1.
  TIP: If the port is not specified as part of a bundle, then the <interface_name> is simply the specified port name, without the m_axi_ prefix.
- <bank_name> is denoted as DDR[0], DDR[1], DDR[2], and DDR[3] for a platform with four DDR banks. Some platforms also provide support for PLRAM, HBM, HP, or MIG memory, in which case you would use PLRAM[0], HBM[0], HP[0], or MIG[0]. You can use the platforminfo utility to get information on the global memory banks available in a specified platform. Refer to platforminfo Utility for more information, and see the example command following this list.
IMPORTANT: The customized bank connection needs to be reflected in the host code as well, as described in Assigning DDR Bank in Host Code.
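For example, the following is a minimal sketch of such a query, assuming the -p (platform) option described in platforminfo Utility and the platform name used in the earlier examples; the report includes the memory banks available on the platform.
platforminfo -p xilinx_u200_xdma_201830_2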
Specify Streaming Connections between Compute Units
The streaming data ports of kernels can be connected during v++
linking using the --connectivity.sc
command. This command can be specified at the command
line, or from a config file that is specified using the --config
option, as described in Vitis Compiler Command.
To connect the streaming output port of a producer kernel to the streaming
input port of a consumer kernel, setup the connection in the v++
config file using the connectivity.stream_connect
option as follows:
[connectivity]
#stream_connect=<cu_name>.<output_port>:<cu_name>.<input_port>:[<fifo_depth>]
stream_connect=vadd_1.stream_out:vadd_2.stream_in
Where:
- <cu_name> is an instance name of the CU as determined by the connectivity.nk option, described in Creating Multiple Instances of a Kernel.
- <output_port> or <input_port> is the streaming port defined in the producer or consumer kernel, as described in Streaming Kernel Coding Guidelines, or as described in Coding Guidelines for Free-Running Kernels.
- [:<fifo_depth>] inserts a FIFO of the specified depth between the two streaming ports to prevent stalls. The value is specified as an integer.
Assigning Compute Units to SLRs
Currently, the Xilinx devices on Alveo Data Center accelerator cards use stacked silicon devices with several Super Logic Regions (SLRs) to provide device resources, including global memory. When assigning ports to global memory banks, as described in Mapping Kernel Ports to Global Memory, it is best to assign the CU instance to the same SLR as the global memory it is connected to. In this case, you will want to manually assign the kernel instance, or CU, to the appropriate SLR to ensure the best performance.
A CU can be assigned into an SLR during the v++
linking
process using the connectivity.slr
option in a config
file, and specified with the --config
option in the
v++
command line. The syntax of the connectivity.slr
option in the config file is as follows:
[connectivity]
#slr=<compute_unit_name>:<slr_ID>
slr=vadd_1:SLR2
slr=vadd_2:SLR3
where:
- <compute_unit_name> is an instance name of the CU as determined by the connectivity.nk option, described in Creating Multiple Instances of a Kernel, or is simply <kernel_name>_1 if multiple CUs are not specified.
- <slr_ID> is the SLR number to which the CU is assigned, in the form SLR0, SLR1, ...
The assignment of a CU to an SLR must be specified for each CU
separately, but is not required. In the absence of an SLR assignment, the
v++
linker is free to assign the CU to any SLR.
The config file is passed to the v++ linking process by specifying it with the --config option:
v++ -l --config config_slr.txt ...
Managing FPGA Synthesis and Implementation Results in the Vivado Tool
Introduction
In most cases, the Vitis environment completely abstracts away the underlying process of synthesis and implementation of the programmable logic region, as the CUs are linked with the hardware platform and the FPGA binary (xclbin) is generated. This removes the application developer from the typical hardware development process, and the need to manage constraints such as logic placement and routing delays. The Vitis tool automates much of the FPGA implementation process.
However, in some cases you might want to exercise some control over the
synthesis and implementation processes deployed by the Vitis compiler, especially when large designs are being implemented.
Towards this end, the Vitis tool offers some
control through specific options that can be specified in a v++
configuration file, or from the command line. The following are
some of the ways you can interact with and control the Vivado synthesis and implementation results.
- Using the --vivado option to manage the Vivado tool.
- Using the --to_step and --from_step options to run the compilation or linking process to a specific step, perform some manual intervention on the design, and resume from that step.
- Interactively editing the Vivado project, and using the results for generating the FPGA binary.
Using the --vivado and --advanced Options
Using the --vivado
option, as described in
--vivado Options, and the
--advanced
option as described in --advanced Options, you can perform a number of interventions on the
standard Vivado synthesis or
implementation.
- Pass Tcl scripts, with custom design constraints or scripted operations.
  You can create Tcl scripts to assign XDC design constraints to objects in the design, and pass these Tcl scripts to the Vivado tools using the PRE and POST Tcl script properties of the synthesis and implementation steps. For more information on Tcl scripting, refer to the Vivado Design Suite User Guide: Using Tcl Scripting (UG894). While there is only one synthesis step, there are a number of implementation steps as described in the Vivado Design Suite User Guide: Implementation (UG904). You can assign Tcl scripts for the Vivado tool to run before the step (PRE), or after the step (POST). The specific steps you can assign Tcl scripts to include the following: SYNTHESIS, INIT_DESIGN, OPT_DESIGN, PLACE_DESIGN, ROUTE_DESIGN, and WRITE_BITSTREAM.
  TIP: There are also some optional steps that can be enabled using the --vivado.prop run.impl_1.steps.phys_opt_design.is_enabled=1 option. When enabled, these steps can also have Tcl PRE and POST scripts.
  An example of the Tcl PRE and POST script assignments follows:
  --vivado.prop run.impl_1.STEPS.PLACE_DESIGN.TCL.PRE=/…/xxx.tcl
In the preceding example a script has been assigned to run before the PLACE_DESIGN step. The command line is broken down as follows:
- --vivado is the v++ command-line option to specify directives for the Vivado tools.
- prop is a keyword indicating that you are passing a property setting.
- run. is a keyword indicating that you are passing a run property.
- impl_1. indicates the name of the run.
- STEPS.PLACE_DESIGN.TCL.PRE indicates the run property you are specifying.
- /…/xxx.tcl indicates the property value.
TIP: Both the --advanced and --vivado options can be specified on the v++ command line, or in a configuration file specified by the --config option. The example above shows the command line use, and the following example shows the config file usage. Refer to Vitis Compiler Configuration File for more information.
- Setting properties on run, file, and fileset design objects.
  This is very similar to passing Tcl scripts as described above, but in this case you are passing values to different properties on multiple design objects. For example, to use a specific implementation strategy such as Performance_Explore, you can define the properties as shown below:
  [vivado]
  prop=run.impl_1.STEPS.OPT_DESIGN.ARGS.DIRECTIVE=Explore
  prop=run.impl_1.STEPS.PLACE_DESIGN.ARGS.DIRECTIVE=Explore
  prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.IS_ENABLED=true
  prop=run.impl_1.STEPS.PHYS_OPT_DESIGN.ARGS.DIRECTIVE=Explore
  prop=run.impl_1.STEPS.ROUTE_DESIGN.ARGS.DIRECTIVE=Explore
In the example above, the Explore value is assigned to the STEPS.XXX.DIRECTIVE property of the implementation run. Note the syntax for defining these properties is:
<object>.<instance>.<property>=<value>
Where:
- <object> can be a design run, a file, or a fileset object.
- <instance> indicates a specific instance of the object.
- <property> specifies the property to assign.
- <value> defines the value of the property.
- Passing parameters to the tool to control processing.
  The --vivado option also allows you to pass parameters to the Vivado tools. The parameters are used to configure the tool features or behavior prior to launching the tool. The syntax for specifying a parameter uses the following form:
  --vivado.param <object><parameter>=<value>
  The keyword param indicates that you are passing a parameter for the Vivado tools, rather than a property for a design object. You must also define the <object> it applies to, the <parameter> that you are specifying, and the <value> to assign it.
  In the following example, project indicates the current Vivado project, writeIntermediateCheckpoints is the parameter being passed, and the value is 1, which enables this Boolean parameter:
  --vivado.param project.writeIntermediateCheckpoints=1
- Managing the reports generated during synthesis and implementation.
  IMPORTANT: You must also specify --save-temps on the v++ command line when customizing the reports generated by the Vivado tool, in order to preserve the temporary files created during synthesis and implementation, including any generated reports.
  You may also want to generate or save more than the standard reports provided by the Vivado tools when run as part of the Vitis tools build process. You can customize the reports generated using the --advanced.misc option as follows:
  [advanced]
  misc=report=type report_utilization name synth_report_utilization_summary steps {synth_design} runs {__KERNEL__} options {}
  misc=report=type report_timing_summary name impl_report_timing_summary_init_design_summary steps {init_design} runs {impl_1} options {-max_paths 10}
  misc=report=type report_utilization name impl_report_utilization_init_design_summary steps {init_design} runs {impl_1} options {}
  misc=report=type report_control_sets name impl_report_control_sets_place_design_summary steps {place_design} runs {impl_1} options {-verbose}
  misc=report=type report_utilization name impl_report_utilization_place_design_summary steps {place_design} runs {impl_1} options {}
  misc=report=type report_io name impl_report_io_place_design_summary steps {place_design} runs {impl_1} options {}
  misc=report=type report_bus_skew name impl_report_bus_skew_route_design_summary steps {route_design} runs {impl_1} options {-warn_on_violation}
  misc=report=type report_clock_utilization name impl_report_clock_utilization_route_design_summary steps {route_design} runs {impl_1} options {}
The syntax of the command line is explained using the following example:
misc=report=type report_bus_skew name impl_report_bus_skew_route_design_summary steps {route_design} runs {impl_1} options {-warn_on_violation}
- misc=report=: Specifies the --advanced.misc option as described in --advanced Options, and defines the report configuration for the Vivado tool. The rest of the command line is specified in name/value pairs, reflecting the options of the create_report_config Tcl command as described in Vivado Design Suite Tcl Command Reference Guide (UG835).
- type report_bus_skew: Relates to the -report_type argument, and specifies the type of the report as report_bus_skew. Most of the report_* Tcl commands can be specified as the report type.
- name impl_report_bus_skew_route_design_summary: Relates to the -report_name argument, and specifies the name of the report. Note this is not the file name of the report, and generally this option can be skipped as the report names will be auto-generated by the tool.
- steps {route_design}: Relates to the -steps option, and specifies the synthesis and implementation steps that the report applies to. The report can be specified for use with multiple steps to have the report regenerated at each step, in which case the name of the report will be automatically defined.
- runs {impl_1}: Relates to the -runs option, and specifies the name of the design runs to apply the report to.
- options {-warn_on_violation}: Specifies various options of the report_* Tcl command to be used when generating the report. In this example, the -warn_on_violation option is a feature of the report_bus_skew command.
IMPORTANT: There is no error checking to ensure the specified options are correct and applicable to the report type specified. If you indicate options that are incorrect, the report will return an error when it is run.
Running --to_step or --from_step
The --to_step and --from_step options are incremental build options that require you to use the same project directory when launching the Vitis compiler with --from_step to resume the build as you specified when using --to_step to start the build. The Vitis compiler lets you specify a step to run to, so that you can stop the build process after completing the specified step, manually intervene in the design or files in some way, and then rerun the build specifying a step the build should start from. The commands to do this are --to_step, to run the build process through that step, and --from_step, to resume the build from the specified step of the Vitis compiler, as described in Vitis Compiler General Options. A short sketch of this flow follows.
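The following is a minimal sketch of that flow, assuming a hardware link build; the platform and object file names are placeholders, the step names are taken from the list below, and --save-temps is required so that the intermediate files needed to resume the build are preserved.
v++ -t hw --link --to_step vpl.synth --save-temps --platform <PLATFORM_NAME> <XO_FILES>
# ...manually intervene in the design or generated files here...
v++ -t hw --link --from_step vpl.impl.opt_design --save-temps --platform <PLATFORM_NAME> <XO_FILES>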
Use the --list_steps option to list the available steps for the compilation or linking processes of a specific build target. For example, the list of steps for the link process of the hardware build can be found by:
v++ --list_steps --target hw --link
This command returns a number of steps, both default steps and optional steps, that the Vitis compiler goes through during the linking process of the hardware build. Some of the default steps include: system_link, vpl, vpl.create_project, vpl.create_bd, vpl.generate_target, vpl.synth, vpl.impl.opt_design, vpl.impl.place_design, vpl.impl.route_design, and vpl.impl.write_bitstream.
Optional steps include: vpl.impl.power_opt_design, vpl.impl.post_place_power_opt_design, vpl.impl.phys_opt_design, and vpl.impl.post_route_phys_opt_design. An optional step must be enabled before it can be specified with --from_step or --to_step. For example, to enable the PHYS_OPT_DESIGN step, use the following config file content:
[vivado]
prop=run.impl_1.steps.phys_opt_design.is_enabled=1
Launching the Vivado IDE for Interactive Design
Using the --to_step option, you can run the build process through Vivado synthesis, for example, and then launch the Vivado IDE on the project to manually place and route the design. To do this you would use the following command syntax:
v++ --target hw --link --to_step vpl.synth --save-temps --platform <PLATFORM_NAME> <XO_FILES>
This command specifies the link process of the hardware build, runs the build through the synthesis step, and saves the temporary files produced by the build process. You must specify --save-temps when using --to_step to preserve any temporary files required by the build process.
You can launch the Vivado tool directly on the project built by the Vitis compiler, which you can find at _x/link/vivado/vpl/prj in your build directory. When invoking the Vivado IDE in this mode, you can open the synthesis or implementation run to manage and modify the project. You can change the run details as needed to close timing and try different approaches to implementation. You can save the results to a design checkpoint (DCP) to use in the Vitis environment to generate the FPGA binary.
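As a sketch, and assuming the default project file name inside that directory (the exact name can vary between tool versions), the project can be opened directly in the Vivado IDE:
vivado ./_x/link/vivado/vpl/prj/prj.xpr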
After saving the DCP from within the Vivado IDE, you can close the tool and return to the Vitis environment. Use the --reuse_impl
option to use a previously implemented DCP file in the
v++ command line to generate the xclbin
.
The --reuse_impl option is an incremental build option that requires you to use the same project directory when resuming the Vitis compiler with --reuse_impl that you specified when using --to_step to start the build. For example:
v++ --link --platform <PLATFORM_NAME> -o'project.xclbin' project.xo --reuse_impl ./_x/link/vivado/routed.dcp
Additional Vivado Options
Some additional switches that can be used in the v++ command line or config file include the following:
- --export_script/--custom_script: Let you export and edit an HLS Tcl script, and use it to modify the compilation process.
- --interactive: Allows the Vivado IDE to be launched from within the v++ environment, with the project loaded.
- --remote_ip_cache: Specifies a remote IP cache directory for Vivado synthesis. (A usage sketch follows this list.)
- --no_ip_cache: Turns off the IP cache for Vivado synthesis. This causes all IP to be resynthesized as part of the build process, scrubbing out cached data.
Controlling Report Generation
The v++ -R option (or --report_level) controls the level of information to report during compilation or linking for hardware emulation and system targets. Builds that generate fewer reports will typically run more quickly.
The command line option is as follows:
$ v++ -R <report_level>
Where <report_level>
is one of the
following options:
- -R0: Minimal reports and no intermediate design checkpoints (DCP).
- -R1: Includes R0 reports plus:
  - Identifies design characteristics to review for each kernel (report_failfast).
  - Identifies design characteristics to review for the full post-optimization design.
  - Saves the post-optimization design checkpoint (DCP) file for later examination or use in the Vivado Design Suite.
  TIP: report_failfast is a utility that highlights potential device usage challenges, clock constraint problems, and potential unreachable target frequency (MHz).
- -R2: Includes R1 reports plus:
  - All standard reports from the Vivado tools, including saved DCPs after each implementation step. (A usage sketch follows this list.)
  - Design characteristics to review for each SLR after placement.
- -Restimate: Forces Vitis HLS to generate a System Estimate report, as described in System Estimate Report.
  TIP: This option is useful for the software emulation build (-t sw_emu).
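For example, the following sketch requests the fuller -R2 report set during a hardware link; the platform and input file names are placeholders.
v++ -R2 -t hw --link --platform <PLATFORM_NAME> <XO_FILES>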