The precious Parallella board (Kickstarter edition) arrived this week and I've been playing with it non-stop. This board contains the bigger Zynq FPGA device (7020) and is thus capable of holding large designs like the full RISC-V IMAFD core generated with rocket-chip. Before testing my project, though, I had to make sure that the board works fine with just the latest Parallella E-SDK (2016.3.1) image loaded on the SD card. On the release page of the E-SDK you can find image archives for both Zynq FPGA devices: the 7010 (Desktop / Microserver editions) and the 7020 (Embedded / Kickstarter editions). These images actually contain a headless Linaro (Ubuntu) 15.04 distribution (root filesystem plus Linux kernel) along with the necessary boot files (FPGA bitstream, device tree file).
I was almost certain that, since my design had changed Zynq's PS configuration, a new FSBL would be needed. This is due to the addition of extra AXI GP & HP ports, separate from those that the Parallella Base AXI interconnect(s) use, since these need to run at a different clock speed (100 MHz) than the RISC-V core (50 MHz for RV64IMA or 25 MHz for RV64IMAFD).
Building the FSBL with correct initialization of the new AXI ports and clock is trivial. Unfortunately, using it on the Parallella is not possible for the end user without some way to reprogram the flash. For example, I have a Xilinx Platform Cable USB II (the red box) for JTAG programming lying around, but I have no way to connect it to Parallella's JTAG port since it has no pins populated (the Parallella Porcupine board was made exactly for this purpose, besides easier GPIO access). I also have a USB-to-serial TTL cable to connect to the 3 serial pins present on the Parallella, but I don't know if programming the on-board flash this way is safe, and I'm not comfortable trying it with the only Parallella board in my (temporary) possession (courtesy of Philipp - thank you!).
UPDATE: In the end I opted to use an MMCM clock manager to drive the RISC-V clock, and thus no extra clock change was needed in the Zynq PS. The extra PS AXI ports are, as I suspected, always enabled even without updating the FSBL. So no FSBL update is needed, and thus no Parallella re-flashing (lucky boards - they will live to tell the story of how they handled the mighty RISC-V core!). I updated the previous design post and am keeping the discussion in this post as a reference for anyone wishing to accomplish what I eventually didn't.
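For reference, generating such an MMCM in the PL can itself be scripted. The sketch below shows roughly how a Clocking Wizard (MMCM) instance deriving a 50 MHz RISC-V clock from the 100 MHz PS clock could be created in Tcl; the module name `riscv_clk_gen` and the exact frequencies are illustrative, not necessarily what the project uses:

```tcl
# Sketch: create a Clocking Wizard IP backed by an MMCM that turns the
# 100 MHz PS-provided clock into the 50 MHz RISC-V core clock.
# (Module name and frequencies are illustrative.)
create_ip -name clk_wiz -vendor xilinx.com -library ip \
    -module_name riscv_clk_gen
set_property -dict [list \
    CONFIG.PRIMITIVE                  {MMCM}    \
    CONFIG.PRIM_IN_FREQ               {100.000} \
    CONFIG.CLKOUT1_REQUESTED_OUT_FREQ {50.000}  \
] [get_ips riscv_clk_gen]
generate_target all [get_ips riscv_clk_gen]
```

For the RV64IMAFD core the requested output frequency would be 25.000 instead of 50.000.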
The Parallella RISC-V FPGA design consists of two major components. The first is the Parallella Base component, connected to the ARM cores via AXI4. The Parallella Base component contains the E-Link needed for communication with the Epiphany chip on board the Parallella, along with a single-ended GPIO passthrough (PL <-> PS connections) and an I2C bus connection to the on-board power regulators that power-manage the Epiphany chip. Thus the bitstreams produced have identical functionality to those provided by an unmodified Parallella.
The other is, of course, the RISC-V RV64 core, which is generated using the rocket-chip generator. The design supports both the RV64IMA and RV64IMAFD RISC-V architectures. The former is used on the smaller core, which doesn't contain an FPU, while the latter is used on the bigger core, which does contain an FPU and can thus execute single- and double-precision floating-point instructions natively. The default selected core is the smaller RV64IMA so that it can fit on all Parallella editions, regardless of the Zynq FPGA device size they contain.
Make is one of the most common build tools, used to write recipes describing how you want your software (or hardware) to be built. Although for pure software projects I prefer CMake over Make, for the parallella-riscv port I decided to use the latter since, besides building the software, we can leverage its power to also generate the FPGA bitstream and everything else needed. Make is a very versatile tool and can be used for all sorts of automation tasks in a project. This post describes some of the make infrastructure that I wrote in order to accomplish the project's build tasks.
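To give a flavor of what this looks like, here is a minimal sketch of the kind of top-level Makefile meant here, with targets that drive Vivado in batch mode for the hardware parts. All file, script, and variable names are illustrative, not the project's actual ones:

```make
# Sketch of a top-level Makefile driving both hardware and software builds.
# Script paths and variable names are hypothetical.
BOARD ?= parallella
CORE  ?= RV64IMA        # or RV64IMAFD for the FPU-capable core

bitstream: ip
	vivado -mode batch -source scripts/build_bitstream.tcl \
	       -tclargs $(BOARD) $(CORE)

ip:
	vivado -mode batch -source scripts/package_ip.tcl

clean:
	rm -rf build/ vivado*.log vivado*.jou

.PHONY: bitstream ip clean
```

The `.PHONY` declaration tells Make these targets don't produce files of the same name, so they always run when requested.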
As promised in my last post, I will now explain how to automate Vivado with Tcl scripts in order to build a RISC-V RV64G core bitstream for the Parallella board. Such scripts are great because, unlike the steps performed by the designer in the GUI, they can be automated and incorporated into the user's command line flow. Such a command line flow, although it has an initial learning curve, is quicker, more robust and generally has reproducible results. Moreover, the Tcl scripts can easily be committed into a VCS repository, with all the benefits that this entails.
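As a rough sketch of what such a script looks like, the fragment below creates a project, adds sources and constraints, and runs synthesis and implementation through to bitstream generation. It would be run non-interactively with `vivado -mode batch -source build_bitstream.tcl`. The paths and project name are illustrative; only the part number (the Zynq 7020 in a CLG400 package, as on the Kickstarter Parallella) is taken from the board:

```tcl
# Sketch of a minimal batch bitstream-build script.
# File paths and the project name are hypothetical.
create_project -force riscv_parallella ./build -part xc7z020clg400-1
add_files [glob ./ip/rocket-chip/*.v]
add_files -fileset constrs_1 ./constraints/parallella.xdc
update_compile_order -fileset sources_1
launch_runs synth_1 -jobs 4
wait_on_run synth_1
launch_runs impl_1 -to_step write_bitstream -jobs 4
wait_on_run impl_1
```

The `wait_on_run` calls matter in batch mode: `launch_runs` returns immediately, so without them Vivado would exit before the runs finish.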
UPDATE: If you read this before July 15th, please re-copy the scripts or re-download the archive with all the necessary files discussed in the post.
Tcl scripting is very easy since Tcl is a language designed for tool control, and it wouldn't make sense if it were difficult to write (warning: this is not a universal truth in computing!). However, one must know the tool(s) that the script will drive in order to do anything useful with it. Luckily, all of the Vivado tools, when run in GUI mode, print in the Tcl console (found in one of the bottom windows) the Tcl commands equivalent to the actions the user performed in the GUI. So until you learn all of the commands needed (there are literally hundreds of them) you can just perform GUI actions and log them somewhere for future reference when writing your Tcl scripts.
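To illustrate, these are the kinds of lines the Tcl console echoes for a few common GUI actions (the file and design names here are made up for the example):

```tcl
# GUI action                ->  Tcl console echo (names are illustrative)
add_files -norecurse ./rtl/riscv_top.v   ;# "Add Sources" wizard
create_bd_design "system"                ;# "Create Block Design"
launch_runs synth_1                      ;# "Run Synthesis" button
```

Vivado also records every command of a session in its journal file (`vivado.jou`), which is a convenient place to harvest commands from after a GUI session.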
Packaging either a new or an existing IP with Vivado is really simple, and in this post I will show you how to package the RISC-V RV64G rocket core I produced with the rocket-chip generator. I am using Vivado 2015.4, but this short tutorial will (probably!) work with any tool version you might have, as long as you are able to make slight changes based on what you see in your tool.
UPDATE: If you read this before July 15th, please re-download the Verilog RTL sources or the complete final IP archive since they now contain important fixes.
As you can see from the plethora of screenshots that follow, this is a graphical way of packaging your IP using the Vivado GUI. A preferable method would be to create Vivado Tcl scripts so that you can integrate the IP packaging into your own scripted flow, interoperate with other tools you might use, etc. For the GSoC project I work on, I chose to do both GUI and Tcl packaging. I first used the Vivado GUI to learn the needed Tcl commands, since the Tcl console at the bottom of the GUI echoes commands with identical functionality when run inside a script, which is one of Vivado's great strengths. One can then place these commands inside a script for actual inclusion in a project and commit this script to the project's GitHub repository, something that I'll try to explain in a future post.
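For a taste of what the scripted equivalent of the GUI flow looks like, here is a sketch using Vivado's `ipx::` packaging commands. The output directory, vendor, and core name are illustrative placeholders, not the values the project necessarily uses:

```tcl
# Sketch: package the current project's sources as a reusable IP core.
# Directory, vendor, and name values are hypothetical.
ipx::package_project -root_dir ./ip/riscv_rocket -import_files
set_property vendor  {example.org}       [ipx::current_core]
set_property library {user}              [ipx::current_core]
set_property name    {riscv_rocket_core} [ipx::current_core]
set_property version {1.0}               [ipx::current_core]
ipx::create_xgui_files [ipx::current_core]
ipx::update_checksums  [ipx::current_core]
ipx::save_core         [ipx::current_core]
```

These are essentially the commands the GUI's "Package IP" flow echoes in the Tcl console, which is exactly how one would collect them into a committable script.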