
Automation of Vivado with Tcl - Week 3 of GSoC 2016

As promised in my last post, I will now explain how to automate Vivado with Tcl scripts in order to build a RISC-V RV64G core bitstream for the Parallella board. Such scripts are great because, unlike steps performed by the designer in the GUI, they can be automated and incorporated into the user's command line flow. A command line flow, despite its initial learning curve, is quicker, more robust and generally produces reproducible results. Moreover, Tcl scripts can easily be committed into a VCS repository, with all the benefits that this entails.

UPDATE: If you read this before July 15th, please re-copy the scripts or re-download the archive with all the necessary files discussed in the post.

Tcl scripting is very easy since Tcl is a language designed for tool control and it wouldn't make sense if it were difficult to write (warning: this is not a universal truth in computing!). However, one must know the tool(s) the script will drive in order to do anything useful with it. Luckily, when run in GUI mode, Vivado prints in the Tcl console (found in one of the bottom windows) the Tcl commands equivalent to the actions the user performs in the GUI. So until you learn all the commands you need (there are literally hundreds of them) you can just perform GUI actions and log them somewhere for future reference when writing your Tcl scripts.

As a matter of fact you don't even have to log these commands, since Vivado wants to make our life easier and writes every Tcl command it executes, along with a log of operations, into the vivado.log and vivado.jou (journal) files that appear in the directory from which you launched the Vivado GUI. And when you need an even better reference, with all the available commands and the parameters they accept, you can browse the Vivado Design Suite Tcl Command Reference Guide (UG835). Unfortunately this guide is a little incomplete, since the IP packager commands (those with the ipx:: prefix) are not currently documented and you have to make a note of them while using the GUI. See my previous post, which walks you through the Vivado IP packager for the RISC-V RV64G core.
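As a quick sketch of mining these files: the journal is plain text in which comment lines start with '#', so recovering just the Tcl commands is a one-liner. The file contents below are made up for illustration; the real vivado.jou is written by Vivado itself.

```shell
# Create a tiny stand-in for a Vivado journal file; the real vivado.jou
# appears in the directory from which you launched the GUI.
cat > vivado.jou <<'EOF'
# Vivado v2015.4 (64-bit)
# Start of session
create_project test ./test -part xc7z020clg400-1

set_property target_language Verilog [current_project]
EOF

# Keep only the Tcl commands: drop '#' comment lines and blank lines.
grep -v '^#' vivado.jou | sed '/^$/d'
```

The surviving lines can be pasted straight into a script as a starting point for automation.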

Let's now create a Tcl script that can package the RISC-V RV64G core we produced with the rocket-chip generator. Our script will take the Verilog RTL source files of the core, package them as an IP suitable for Vivado, and then use that IP within a block design (more on this later in this post). After that it will synthesize and implement the design and build a final bitstream with which we can program our FPGA device.

Before I start describing the script I want to note that most of the Tcl you see was taken from the wonderful Parallella OH repository, which contains great automation facilities for building the designs it hosts. After studying a lot of Tcl for my GSoC project, along with various repositories that did the same kind of builds, I decided to use the parallella/oh repo's Tcl structure since in my opinion it is the clearest and easiest to use of them all. Most of it was written by my co-mentor Andreas Olofsson, who has tons of experience with VLSI and FPGA designs. The same can be said for my other mentor Olof Kindgren, long-time OpenRISC developer and heroic FuseSoC developer. They are clearly the best mentors I could have, and my only regret is that my GSoC project is probably too easy to seriously need their expertise! OK, enough with the chit-chat, let's get dirty with Tcl!

IP Settings

First of all, our script has some settings that dictate the IP's details. In our case it's a RISC-V RV64G IP for the Zynq device xc7z020clg400-1. The source files should be placed in an hdl folder, which in turn should sit inside the folder that contains the Tcl script.

If you own the smaller Parallella board with the Zynq device xc7z010clg400-1 you won't be able to produce a valid bitstream, since the full RISC-V RV64G core won't fit in this device. To overcome this you can generate a new RISC-V core without an FPU (a.k.a. a RISC-V RV64IMA core). I have blogged previously about how to use the rocket-chip generator; you just have to make sure to use the following command to generate the new core: cd rocket-chip/fsim && make verilog CONFIG=DefaultFPGASmallConfig (replacing the command at the end of that post). After the core is generated in rocket-chip/fsim/generated-srcs/TopDefaultFPGASmallConfig.v you must use it to replace the RV64G.v file in the archive given at the end of the current post.

#############
# IP Settings
#############

set design riscv_rv64g

set projdir ./riscv_rv64g/

set root "."

# FPGA device
set partname "xc7z020clg400-1"

# Board part
# (not needed for Parallella board -
# for ZedBoard use "em.avnet.com:zed:part0:1.3")
set boardpart ""

set hdl_files [list $root/hdl/]

set ip_files []

set constraints_files []

# Other variables
set clk_m_axi "m_axi_aclk"
set clk_s_axi "s_axi_aclk"

Package IP
The following part creates a managed IP project and then adds all the needed HDL and constraint files (none in our case) to it. At the end it calls the IP packager to bundle the core, using hardcoded properties that you might want to move into the above settings as variables.

###########################
# Create Managed IP Project
###########################

create_project -force $design $projdir -part $partname 
set_property target_language Verilog [current_project]
set_property source_mgmt_mode None [current_project]

if {$boardpart != ""} {
    set_property "board_part" $boardpart [current_project]
}

##########################################
# Create filesets and add files to project
##########################################

#HDL
if {[string equal [get_filesets -quiet sources_1] ""]} {
    create_fileset -srcset sources_1
}

add_files -norecurse -fileset [get_filesets sources_1] $hdl_files

set_property top $design [get_filesets sources_1]

#CONSTRAINTS
if {[string equal [get_filesets -quiet constraints_1] ""]} {
  create_fileset -constrset constraints_1
}
if {[llength $constraints_files] != 0} {
    add_files -norecurse -fileset [get_filesets constraints_1] $constraints_files
}

#ADDING IP
if {[llength $ip_files] != 0} {
    
    #Add to fileset
    add_files -norecurse -fileset [get_filesets sources_1] $ip_files
   
    #RERUN/UPGRADE IP
    upgrade_ip [get_ips]
}

##########################################
# Synthesize (Optional, checks for sanity)
##########################################

#set_property top $design [current_fileset]
#launch_runs synth_1 -jobs 2
#wait_on_run synth_1


#########
# Package
#########

ipx::package_project -import_files -force -root_dir $projdir
ipx::associate_bus_interfaces -busif s_axi -clock $clk_s_axi [ipx::current_core]
ipx::associate_bus_interfaces -busif m_axi -clock $clk_m_axi [ipx::current_core]

ipx::remove_memory_map {s_axi} [ipx::current_core]
ipx::add_memory_map {s_axi} [ipx::current_core]
set_property slave_memory_map_ref {s_axi} [ipx::get_bus_interfaces s_axi -of_objects [ipx::current_core]]
ipx::add_address_block {axi_lite} [ipx::get_memory_maps s_axi -of_objects [ipx::current_core]]
set_property range {65536} [ipx::get_address_blocks axi_lite -of_objects \
    [ipx::get_memory_maps s_axi -of_objects [ipx::current_core]]]

set_property vendor              {www.parallella.org}    [ipx::current_core]
set_property library             {user}                  [ipx::current_core]
set_property taxonomy            {{/AXI_Infrastructure}} [ipx::current_core]
set_property vendor_display_name {ADAPTEVA}              [ipx::current_core]
set_property company_url         {www.parallella.org}    [ipx::current_core]
set_property supported_families  { \
                     {virtex7}    {Production} \
                     {qvirtex7}   {Production} \
                     {kintex7}    {Production} \
                     {kintex7l}   {Production} \
                     {qkintex7}   {Production} \
                     {qkintex7l}  {Production} \
                     {artix7}     {Production} \
                     {artix7l}    {Production} \
                     {aartix7}    {Production} \
                     {qartix7}    {Production} \
                     {zynq}       {Production} \
                     {qzynq}      {Production} \
                     {azynq}      {Production} \
                     }   [ipx::current_core]

############################
# Save and Write ZIP archive
############################

ipx::create_xgui_files [ipx::current_core]
ipx::update_checksums [ipx::current_core]
ipx::save_core [ipx::current_core]
ipx::check_integrity -quiet [ipx::current_core]
ipx::archive_core [concat $projdir/$design.zip] [ipx::current_core]

System Settings
The next part contains more settings, this time for our system design / project, which will contain the packaged IP from above along with a block design that instantiates the IP and connects it with the processing system of the Zynq device.

#################
# System Settings
#################

#Design name ("system" recommended)
set design system

#Project directory
set projdir ./parallella_riscv/

#Device name
set partname "xc7z020clg400-1"

#Board part
set boardpart ""

#Paths to all IP blocks to use in Vivado "system.bd"
set ip_repos [list "./riscv_rv64g"]

#System's extra source files
set hdl_files []

#System's constraints files
set constraints_files []

System Project

The next part creates the system project and adds to it the IP we packaged plus a block design (described later). You can open the generated project with Vivado (./parallella_riscv/system.xpr) if you want to inspect the design or perform any tasks using the Vivado GUI (e.g. see the block design it contains).

################
# CREATE PROJECT
################

create_project -force $design $projdir -part $partname
set_property target_language Verilog [current_project]

if {$boardpart != ""} {
    set_property "board_part" $boardpart [current_project]
}

#################################
# Create Report/Results Directory
#################################

set report_dir  $projdir/reports
set results_dir $projdir/results
if ![file exists $report_dir]  {file mkdir $report_dir}
if ![file exists $results_dir] {file mkdir $results_dir}

####################################
# Add IP Repositories to search path
####################################

set other_repos [get_property ip_repo_paths [current_project]]
set_property  ip_repo_paths  "$ip_repos $other_repos" [current_project]

update_ip_catalog

#####################################
# CREATE BLOCK DESIGN (GUI/TCL COMBO)
#####################################

create_bd_design "system"

source ./system_bd.tcl
make_wrapper -files [get_files $projdir/${design}.srcs/sources_1/bd/system/system.bd] -top

###########################################################
# ADD FILES
###########################################################

#HDL
if {[string equal [get_filesets -quiet sources_1] ""]} {
    create_fileset -srcset sources_1
}
set top_wrapper $projdir/${design}.srcs/sources_1/bd/system/hdl/system_wrapper.v
add_files -norecurse -fileset [get_filesets sources_1] $top_wrapper

if {[llength $hdl_files] != 0} {
    add_files -norecurse -fileset [get_filesets sources_1] $hdl_files
}

#CONSTRAINTS
if {[string equal [get_filesets -quiet constrs_1] ""]} {
  create_fileset -constrset constrs_1
}
if {[llength $constraints_files] != 0} {
    add_files -norecurse -fileset [get_filesets constrs_1] $constraints_files
}

Build Bitstream

Finally, this part validates the design and then launches the synthesis and implementation tasks. This can take a while, of course. At the end it produces a bitstream (./parallella_riscv/system.runs/impl_1/system_wrapper.bit). Note that you cannot place this bitstream directly on your SD card; the conversion needed with the bootgen utility (.bit -> .bit.bin) is discussed at the end of the post. You can, however, use this .bit file to program the FPGA of an already booted Parallella through the /dev/xdevcfg device node present in newer Parallella Linux kernels. That method of programming will not be shown here, but it's very easy to find info on it elsewhere.

##############################################
# Validate design and create top-level wrapper
##############################################

validate_bd_design
make_wrapper -files [get_files $projdir/${design}.srcs/sources_1/bd/system/system.bd] -top
remove_files -fileset sources_1 $projdir/${design}.srcs/sources_1/bd/system/hdl/system_wrapper.v
add_files -fileset sources_1 -norecurse $projdir/${design}.srcs/sources_1/bd/system/hdl/system_wrapper.v

###########
# Synthesis
###########

launch_runs synth_1
wait_on_run synth_1

# Report timing summary (optional)
#report_timing_summary -file synth_timing_summary.rpt

#################
# Place and route
#################

launch_runs impl_1
wait_on_run impl_1

# Report timing summary (optional)
#report_timing_summary -file impl_timing_summary.rpt

# Create netlist (optional)
#write_verilog ./system.v

#################
# Write Bitstream
#################

launch_runs impl_1 -to_step write_bitstream
wait_on_run impl_1

exit

Block Design

As mentioned above, there is a system block design that connects the RISC-V IP with the processing system (PS) of the Zynq device. It also contains various other blocks: AXI interconnects for the master and slave ports respectively, and a clocking wizard that instantiates an MMCM clock manager (PLL) on the FPGA fabric's resources, dividing the regular 100 MHz clock from the PS by 4 to produce the 25 MHz clock that feeds the RISC-V core and its AXI buses / interconnects.
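The divide-by-4 is realized with fractional MMCM dividers. Using the clk_wiz values configured in the block design Tcl below (CLKFBOUT_MULT_F = 9.125, DIVCLK_DIVIDE = 1, CLKOUT0_DIVIDE_F = 36.5), the arithmetic checks out:

```shell
# MMCM output = Fin * CLKFBOUT_MULT_F / DIVCLK_DIVIDE / CLKOUT0_DIVIDE_F
# i.e. 100 MHz * 9.125 / 1 / 36.5 (awk handles the fractional dividers)
awk 'BEGIN { printf "clk_out1 = %.3f MHz\n", 100 * 9.125 / 1 / 36.5 }'
```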

Near the end it also declares the memory maps of the RV64G core. The AXI slave port that is used to send host I/O (HTIF) commands is mapped to address 0x43c00000 (4 KB). The AXI master port is used to make DRAM memory accesses and is mapped with the ability to access all 1 GB of Parallella's DRAM. In practice our Verilog wrapper restricts the core to the 0x30000000 - 0x3e000000 range (224 MB) and the device tree blob (DTB) that we will use reserves this area from the system / Linux kernel.
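A quick sanity check on these numbers (plain shell arithmetic, nothing Vivado-specific):

```shell
# HTIF slave window: range 0x1000 at 0x43C00000
printf 'slave window: %d KB at 0x43C00000\n' $(( 0x1000 / 1024 ))

# DRAM window the Verilog wrapper restricts the core to
printf 'DRAM window: %d MB\n' $(( (0x3E000000 - 0x30000000) / 1024 / 1024 ))
```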

This block design should be usable with whatever Vivado version you have (e.g. 2016.1). To achieve this I used a trick for the versions of the IPs instantiated within it: I removed the version number from each VLNV string (Vendor, Library, Name, Version). It turns out that leaving out the last part (the version) is accepted by Vivado, which then uses whatever version of the IP exists in its catalog / library (no version acts like a wildcard), which is exactly what is needed to make the script independent of the Vivado version. Of course, this trick makes the system buildable, but not necessarily synthesizable and / or functional, since the IPs might have changed their ports between versions. For mainstream IPs like the PS7, the AXI interconnect, clk_wiz etc. hopefully Xilinx will not change the I/O ports...

You must save the block design in a separate file named system_bd.tcl, since that is the name the Tcl part above sources. Moreover, for the block design Tcl I kept the same format that Vivado produces when you execute the File -> Export -> Export Block Design menu option. This allows you to change the block design and then export it on top of this file (overwriting it) without anything getting lost in the process.


################################################################
# This is a generated script based on design: system
#
# Though there are limitations about the generated script,
# the main purpose of this utility is to make learning
# IP Integrator Tcl commands easier.
################################################################

################################################################
# Check if script is running in correct Vivado version.
################################################################
set scripts_vivado_version 2015.4
set current_vivado_version [version -short]

if { [string first $scripts_vivado_version $current_vivado_version] == -1 } {
   puts ""
   puts "ERROR: This script was generated using Vivado <$scripts_vivado_version> and is being run in <$current_vivado_version> of Vivado. Please run the script in Vivado <$scripts_vivado_version> then open the design in Vivado <$current_vivado_version>. Upgrade the design by running \"Tools => Report => Report IP Status...\", then run write_bd_tcl to create an updated script."

   return 1
}

################################################################
# START
################################################################

# To test this script, run the following commands from Vivado Tcl console:
# source system_script.tcl

# If you do not already have a project created,
# you can create a project using the following command:
#    create_project project_1 myproj -part xc7z020clg400-1

# CHECKING IF PROJECT EXISTS
if { [get_projects -quiet] eq "" } {
   puts "ERROR: Please open or create a project!"
   return 1
}



# CHANGE DESIGN NAME HERE
set design_name system

# If you do not already have an existing IP Integrator design open,
# you can create a design using the following command:
#    create_bd_design $design_name

# Creating design if needed
set errMsg ""
set nRet 0

set cur_design [current_bd_design -quiet]
set list_cells [get_bd_cells -quiet]

if { ${design_name} eq "" } {
   # USE CASES:
   #    1) Design_name not set

   set errMsg "ERROR: Please set the variable <design_name> to a non-empty value."
   set nRet 1

} elseif { ${cur_design} ne "" && ${list_cells} eq "" } {
   # USE CASES:
   #    2): Current design opened AND is empty AND names same.
   #    3): Current design opened AND is empty AND names diff; design_name NOT in project.
   #    4): Current design opened AND is empty AND names diff; design_name exists in project.

   if { $cur_design ne $design_name } {
      puts "INFO: Changing value of <design_name> from <$design_name> to <$cur_design> since current design is empty."
      set design_name [get_property NAME $cur_design]
   }
   puts "INFO: Constructing design in IPI design <$cur_design>..."

} elseif { ${cur_design} ne "" && $list_cells ne "" && $cur_design eq $design_name } {
   # USE CASES:
   #    5) Current design opened AND has components AND same names.

   set errMsg "ERROR: Design <$design_name> already exists in your project, please set the variable <design_name> to another value."
   set nRet 1
} elseif { [get_files -quiet ${design_name}.bd] ne "" } {
   # USE CASES: 
   #    6) Current opened design, has components, but diff names, design_name exists in project.
   #    7) No opened design, design_name exists in project.

   set errMsg "ERROR: Design <$design_name> already exists in your project, please set the variable <design_name> to another value."
   set nRet 2

} else {
   # USE CASES:
   #    8) No opened design, design_name not in project.
   #    9) Current opened design, has components, but diff names, design_name not in project.

   puts "INFO: Currently there is no design <$design_name> in project, so creating one..."

   create_bd_design $design_name

   puts "INFO: Making design <$design_name> as current_bd_design."
   current_bd_design $design_name

}

puts "INFO: Currently the variable <design_name> is equal to \"$design_name\"."

if { $nRet != 0 } {
   puts $errMsg
   return $nRet
}

##################################################################
# DESIGN PROCs
##################################################################



# Procedure to create entire design; Provide argument to make
# procedure reusable. If parentCell is "", will use root.
proc create_root_design { parentCell } {

  if { $parentCell eq "" } {
     set parentCell [get_bd_cells /]
  }

  # Get object for parentCell
  set parentObj [get_bd_cells $parentCell]
  if { $parentObj == "" } {
     puts "ERROR: Unable to find parent cell <$parentCell>!"
     return
  }

  # Make sure parentObj is hier blk
  set parentType [get_property TYPE $parentObj]
  if { $parentType ne "hier" } {
     puts "ERROR: Parent <$parentObj> has TYPE = <$parentType>. Expected to be <hier>."
     return
  }

  # Save current instance; Restore later
  set oldCurInst [current_bd_instance .]

  # Set parent object as current
  current_bd_instance $parentObj


  # Create interface ports
  set DDR [ create_bd_intf_port -mode Master -vlnv xilinx.com:interface:ddrx_rtl:1.0 DDR ]
  set FIXED_IO [ create_bd_intf_port -mode Master -vlnv xilinx.com:display_processing_system7:fixedio_rtl:1.0 FIXED_IO ]

  # Create ports

  # Create instance: RISCV_Rocket_Core_RV64G_0, and set properties
  set RISCV_Rocket_Core_RV64G_0 [ create_bd_cell -type ip -vlnv www.parallella.org:user:RISCV_Rocket_Core_RV64G:1.0 RISCV_Rocket_Core_RV64G_0 ]

  # Create instance: axi_mem_intercon_PS_master, and set properties
  set axi_mem_intercon_PS_master [ create_bd_cell -type ip -vlnv xilinx.com:ip:axi_interconnect: axi_mem_intercon_PS_master ]
  set_property -dict [ list \
CONFIG.NUM_MI {1} \
 ] $axi_mem_intercon_PS_master

  # Create instance: axi_mem_intercon_PS_slave, and set properties
  set axi_mem_intercon_PS_slave [ create_bd_cell -type ip -vlnv xilinx.com:ip:axi_interconnect: axi_mem_intercon_PS_slave ]
  set_property -dict [ list \
CONFIG.NUM_MI {1} \
 ] $axi_mem_intercon_PS_slave

  # Create instance: clk_wiz_0_100M_to_25M, and set properties
  set clk_wiz_0_100M_to_25M [ create_bd_cell -type ip -vlnv xilinx.com:ip:clk_wiz: clk_wiz_0_100M_to_25M ]
  set_property -dict [ list \
CONFIG.CLKOUT1_DRIVES {BUFG} \
CONFIG.CLKOUT1_JITTER {181.828} \
CONFIG.CLKOUT1_PHASE_ERROR {104.359} \
CONFIG.CLKOUT1_REQUESTED_OUT_FREQ {25.000} \
CONFIG.FEEDBACK_SOURCE {FDBK_AUTO} \
CONFIG.MMCM_CLKFBOUT_MULT_F {9.125} \
CONFIG.MMCM_CLKOUT0_DIVIDE_F {36.500} \
CONFIG.MMCM_DIVCLK_DIVIDE {1} \
CONFIG.PRIM_SOURCE {No_buffer} \
CONFIG.RESET_PORT {resetn} \
CONFIG.RESET_TYPE {ACTIVE_LOW} \
 ] $clk_wiz_0_100M_to_25M

  # Create instance: processing_system7_0, and set properties
  set processing_system7_0 [ create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7: processing_system7_0 ]
  set_property -dict [ list \
CONFIG.PCW_CORE0_FIQ_INTR {0} \
CONFIG.PCW_ENET0_ENET0_IO {MIO 16 .. 27} \
CONFIG.PCW_ENET0_GRP_MDIO_ENABLE {1} \
CONFIG.PCW_ENET0_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_ENET1_PERIPHERAL_ENABLE {0} \
CONFIG.PCW_EN_CLK3_PORT {1} \
CONFIG.PCW_FPGA0_PERIPHERAL_FREQMHZ {100} \
CONFIG.PCW_FPGA3_PERIPHERAL_FREQMHZ {100} \
CONFIG.PCW_GPIO_EMIO_GPIO_ENABLE {1} \
CONFIG.PCW_GPIO_MIO_GPIO_ENABLE {1} \
CONFIG.PCW_GPIO_MIO_GPIO_IO {MIO} \
CONFIG.PCW_I2C0_I2C0_IO {EMIO} \
CONFIG.PCW_I2C0_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_I2C0_RESET_ENABLE {0} \
CONFIG.PCW_PRESET_BANK1_VOLTAGE {LVCMOS 1.8V} \
CONFIG.PCW_QSPI_GRP_SINGLE_SS_ENABLE {1} \
CONFIG.PCW_QSPI_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_SD1_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_SD1_SD1_IO {MIO 10 .. 15} \
CONFIG.PCW_SDIO_PERIPHERAL_FREQMHZ {50} \
CONFIG.PCW_UART1_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_UART1_UART1_IO {MIO 8 .. 9} \
CONFIG.PCW_UIPARAM_DDR_BOARD_DELAY0 {0.434} \
CONFIG.PCW_UIPARAM_DDR_BOARD_DELAY1 {0.398} \
CONFIG.PCW_UIPARAM_DDR_BOARD_DELAY2 {0.410} \
CONFIG.PCW_UIPARAM_DDR_BOARD_DELAY3 {0.455} \
CONFIG.PCW_UIPARAM_DDR_CL {9} \
CONFIG.PCW_UIPARAM_DDR_CWL {9} \
CONFIG.PCW_UIPARAM_DDR_DEVICE_CAPACITY {8192 MBits} \
CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_0 {0.315} \
CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_1 {0.391} \
CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_2 {0.374} \
CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_3 {0.271} \
CONFIG.PCW_UIPARAM_DDR_DRAM_WIDTH {32 Bits} \
CONFIG.PCW_UIPARAM_DDR_FREQ_MHZ {400.00} \
CONFIG.PCW_UIPARAM_DDR_PARTNO {Custom} \
CONFIG.PCW_UIPARAM_DDR_T_FAW {50} \
CONFIG.PCW_UIPARAM_DDR_T_RAS_MIN {40} \
CONFIG.PCW_UIPARAM_DDR_T_RC {60} \
CONFIG.PCW_UIPARAM_DDR_T_RCD {9} \
CONFIG.PCW_UIPARAM_DDR_T_RP {9} \
CONFIG.PCW_UIPARAM_DDR_USE_INTERNAL_VREF {1} \
CONFIG.PCW_USB0_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_USB0_RESET_ENABLE {0} \
CONFIG.PCW_USB1_PERIPHERAL_ENABLE {1} \
CONFIG.PCW_USE_FABRIC_INTERRUPT {1} \
CONFIG.PCW_USE_M_AXI_GP0 {1} \
CONFIG.PCW_USE_S_AXI_HP0 {1} \
 ] $processing_system7_0

  # Create instance: rst_processing_system7_0_25M, and set properties
  set rst_processing_system7_0_25M [ create_bd_cell -type ip -vlnv xilinx.com:ip:proc_sys_reset: rst_processing_system7_0_25M ]

  # Create interface connections
  connect_bd_intf_net -intf_net RISCV_Rocket_Core_RV64G_0_M_AXI [get_bd_intf_pins RISCV_Rocket_Core_RV64G_0/m_axi] [get_bd_intf_pins axi_mem_intercon_PS_slave/S00_AXI]
  connect_bd_intf_net -intf_net axi_mem_intercon_1_M00_AXI [get_bd_intf_pins axi_mem_intercon_PS_slave/M00_AXI] [get_bd_intf_pins processing_system7_0/S_AXI_HP0]
  connect_bd_intf_net -intf_net axi_mem_intercon_M00_AXI [get_bd_intf_pins RISCV_Rocket_Core_RV64G_0/s_axi] [get_bd_intf_pins axi_mem_intercon_PS_master/M00_AXI]
  connect_bd_intf_net -intf_net processing_system7_0_DDR [get_bd_intf_ports DDR] [get_bd_intf_pins processing_system7_0/DDR]
  connect_bd_intf_net -intf_net processing_system7_0_FIXED_IO [get_bd_intf_ports FIXED_IO] [get_bd_intf_pins processing_system7_0/FIXED_IO]
  connect_bd_intf_net -intf_net processing_system7_0_M_AXI_GP0 [get_bd_intf_pins axi_mem_intercon_PS_master/S00_AXI] [get_bd_intf_pins processing_system7_0/M_AXI_GP0]

  # Create port connections
  connect_bd_net -net clk_wiz_0_clk_out1 [get_bd_pins RISCV_Rocket_Core_RV64G_0/m_axi_aclk] [get_bd_pins RISCV_Rocket_Core_RV64G_0/s_axi_aclk] [get_bd_pins axi_mem_intercon_PS_master/ACLK] [get_bd_pins axi_mem_intercon_PS_master/M00_ACLK] [get_bd_pins axi_mem_intercon_PS_master/S00_ACLK] [get_bd_pins axi_mem_intercon_PS_slave/ACLK] [get_bd_pins axi_mem_intercon_PS_slave/M00_ACLK] [get_bd_pins axi_mem_intercon_PS_slave/S00_ACLK] [get_bd_pins clk_wiz_0_100M_to_25M/clk_out1] [get_bd_pins processing_system7_0/M_AXI_GP0_ACLK] [get_bd_pins processing_system7_0/S_AXI_HP0_ACLK] [get_bd_pins rst_processing_system7_0_25M/slowest_sync_clk]
  connect_bd_net -net clk_wiz_0_locked [get_bd_pins clk_wiz_0_100M_to_25M/locked] [get_bd_pins rst_processing_system7_0_25M/dcm_locked]
  connect_bd_net -net processing_system7_0_FCLK_CLK3 [get_bd_pins clk_wiz_0_100M_to_25M/clk_in1] [get_bd_pins processing_system7_0/FCLK_CLK3]
  connect_bd_net -net processing_system7_0_FCLK_RESET0_N [get_bd_pins clk_wiz_0_100M_to_25M/resetn] [get_bd_pins processing_system7_0/FCLK_RESET0_N] [get_bd_pins rst_processing_system7_0_25M/ext_reset_in]
  connect_bd_net -net rst_processing_system7_0_25M_interconnect_aresetn [get_bd_pins axi_mem_intercon_PS_master/ARESETN] [get_bd_pins axi_mem_intercon_PS_slave/ARESETN] [get_bd_pins rst_processing_system7_0_25M/interconnect_aresetn]
  connect_bd_net -net rst_processing_system7_0_25M_peripheral_aresetn [get_bd_pins RISCV_Rocket_Core_RV64G_0/m_axi_aresetn] [get_bd_pins RISCV_Rocket_Core_RV64G_0/s_axi_aresetn] [get_bd_pins axi_mem_intercon_PS_master/M00_ARESETN] [get_bd_pins axi_mem_intercon_PS_master/S00_ARESETN] [get_bd_pins axi_mem_intercon_PS_slave/M00_ARESETN] [get_bd_pins axi_mem_intercon_PS_slave/S00_ARESETN] [get_bd_pins rst_processing_system7_0_25M/peripheral_aresetn]

  # Create address segments
  create_bd_addr_seg -range 0x40000000 -offset 0x0 [get_bd_addr_spaces RISCV_Rocket_Core_RV64G_0/m_axi] [get_bd_addr_segs processing_system7_0/S_AXI_HP0/HP0_DDR_LOWOCM] SEG_processing_system7_0_HP0_DDR_LOWOCM
  create_bd_addr_seg -range 0x1000 -offset 0x43C00000 [get_bd_addr_spaces processing_system7_0/Data] [get_bd_addr_segs RISCV_Rocket_Core_RV64G_0/s_axi/axi_lite] SEG_RISCV_Rocket_Core_RV64G_0_axi_lite


  # Restore current instance
  current_bd_instance $oldCurInst

  save_bd_design
}
# End of create_root_design()


##################################################################
# MAIN FLOW
##################################################################

create_root_design ""

Wrapper Bash Script

Now that we have our Tcl script ready we can execute it with Vivado from a simple bash script. The last part with bootgen just converts the bitstream from the .bit format into the raw binary form that Parallella's U-Boot, residing in its flash memory, uses to program the FPGA of the Zynq device on startup. The end result is the parallella.bit.bin file that you should place in the boot partition of your SD card.

#!/bin/bash

VIVADO_PATH=/opt/Xilinx/Vivado
echo "Vivado path set to ${VIVADO_PATH}"

VIVADO_VERSION=2015.4
echo "Vivado version set to ${VIVADO_VERSION}"

source ${VIVADO_PATH}/${VIVADO_VERSION}/settings64.sh

vivado -mode tcl -source package.riscv.tcl

cp parallella_riscv/system.runs/impl_1/system_wrapper.bit parallella.bit
bootgen -image bit2bin.bif -w on -split bin
rm bit2bin.bin parallella.bit
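The script assumes a bit2bin.bif file sitting next to it. Its exact contents aren't shown in this post, but a minimal BIF for this kind of .bit -> .bin conversion typically looks like the following (the image name and bitstream filename here are illustrative, not taken from the archive):

```
the_ROM_image:
{
  parallella.bit
}
```

With the -split bin option, bootgen then emits the bitstream data as parallella.bit.bin.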


Source Code and Prebuilt Binaries

This concludes our automation script and you are now ready to try it yourself, after changing your Vivado version at the top of the system_bd.tcl file and in the bash script that executes the Tcl within Vivado. You can find all of the parts described in this post, along with the Verilog RTL source code for the RISC-V RV64G core, here.

In the archive's ./sd/ folder I also provide a prebuilt parallella.bit.bin bitstream along with a devicetree.dtb (device tree blob) that reserves the necessary memory areas in the Linux kernel. These two files should be placed in the boot partition of your SD card. Please make a copy of the original files on your SD card so that you can restore them if needed.

Finally, the archive contains in its ./sd/riscv/ folder a prebuilt fesvr (frontend server), pk (proxy kernel) and a RISC-V hello world executable, so that you can test everything without the need to build the complete RISC-V toolchain (as discussed in this post). Copy this folder anywhere you wish in your root partition (e.g. /home/parallella/).

Note: As described in this great tutorial repository by Kirill888 (I learned a lot, thank you!), since our new RISC-V bitstream does not contain the e-Link interface, once you boot Parallella any attempt to interface with the Epiphany chip will cause a kernel crash, as the e-Link driver tries to write to addresses that do not exist. We therefore need to shut down the Parallella thermal service as soon as possible after booting Parallella with the new bitstream (better yet, deactivate its service before booting with the new bitstream). This has been resolved in recent Epiphany driver versions, where the driver checks whether e-Link is functional before attempting anything with the Epiphany chip. However, I think it's better to do it anyway:

sudo service parallella-thermald stop

To test everything execute the following:

# Switch to where you placed the archive's ./sd/riscv/ folder
cd path/to/archives/folder/sd/riscv

# Copy fesvr's shared library in your system
sudo cp libfesvr.so /usr/local/lib
sudo ldconfig

# The moment of truth!
sudo ./fesvr pk hello

If all went well you should see the familiar hello world message. Congratulations, you now have a RISC-V RV64G core running on your Parallella board!

Written by Elias Kouskoumvekakis on Monday June 13, 2016

Tag: gsoc2016 - Category: news
