Meaning of set_input_delay and set_output_delay in SDC timing constraints

This post was written by eli on April 6, 2017

Introduction

Synopsys Design Constraints (SDC) has been adopted by Xilinx (in Vivado, as .xdc files), by Altera (in Quartus, as .sdc files), and by other FPGA vendors as well. Despite the wide use of this format, there seems to be some confusion regarding the constraints that define I/O timing.

This post defines what these constraints mean, and then shows the timing calculations made by Vivado and Quartus (on separate pages), demonstrating their meaning on a very simple example design. So there’s no need to take my word for it, and it also gives a direction on how to check that your own constraints did what they were supposed to do.

There are several options to these constraints, but those are documented elsewhere. This post is about the basics.

And yes, it’s the same format with Xilinx and Altera. Compatibility. Unbelievable, but true.

What they mean

In short,

  • set_input_delay -clock … -max … : The maximal clock-to-output of the driving chip + the board’s propagation delay
  • set_input_delay -clock … -min … : The minimal clock-to-output of the driving chip + the minimal board propagation delay. If the datasheet gives no minimum, choose zero (maybe a future revision of the driving chip will be manufactured with a really fast process)
  • set_output_delay -clock … -max … : The t_setup time of the receiving chip + the board’s propagation delay
  • set_output_delay -clock … -min … : Minus the t_hold time of the receiving chip (e.g. set to -1 if the hold time is 1 ns).

Note that if neither -min nor -max is given, it’s as if the same assignment was made twice, once with -min and once with -max. In other words: Poor constraining.
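
For concreteness, here’s how these rules might translate into actual constraints, using invented datasheet figures: say the driving chip’s clock-to-output ranges from 1 to 6 ns, the receiving chip requires t_setup = 2 ns and t_hold = 1 ns, and each board trace contributes some 0.5 ns. All numbers here are made up for illustration:

# Hypothetical figures: t_co = 1 to 6 ns (driver), t_su = 2 ns and
# t_h = 1 ns (receiver), ~0.5 ns of board trace delay on each net.
create_clock -name theclk -period 20 [get_ports test_clk]

# Input: -max = max t_co + board delay, -min = min t_co + min board delay
set_input_delay -clock theclk -max 6.5 [get_ports test_in]
set_input_delay -clock theclk -min 1.5 [get_ports test_in]

# Output: -max = t_su + board delay, -min = minus t_h
set_output_delay -clock theclk -max 2.5 [get_ports test_out]
set_output_delay -clock theclk -min -1.0 [get_ports test_out]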

Always constrain both min and max

It may seem pointless to use the min/max constraints separately. For example, a single catch-both set_output_delay sets the setup time correctly, and the implied hold time to a negative value, which is incorrect. But why bother? It allows the output port to toggle before the clock, but that couldn’t happen, could it?
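
To spell out the equivalence (using the clock and port names from the example design below, and an arbitrary figure of 8 ns):

set_output_delay -clock theclk 8 [get_ports test_out]

# ... is exactly equivalent to the pair

set_output_delay -clock theclk -max 8 [get_ports test_out]
set_output_delay -clock theclk -min 8 [get_ports test_out]

# -max 8 is a sensible setup requirement, but -min 8 implies a hold
# time of -8 ns at the receiver: the output may toggle up to 8 ns
# before the clock edge without any reported violation.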

Well, actually it can. For example, it’s quite common to let an FPGA PLL (or the like) generate the internal FPGA clock from the clock at some input pin (the “clock on the board”). This allows the PLL to align the clock on the FPGA’s internal clock network with the input clock, by time-shifting it slightly to compensate for the delay of the clock distribution network.

Actually, the implementation tools may feel free to shift the clock slightly earlier than the clock input, in order to meet timing better: A slow path from logic to an output pin may violate the maximal allowed delay from clock to output, and moving the clock earlier fixes this. But moving the internal clock earlier than the clock on the board may cause other outputs, which depend on the same clock, to switch before the clock on the board toggles, leading to hold time violations at the receivers of these outputs. Nothing prevents this from happening, except a min output delay constraint.
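
To put numbers on this (made up for illustration): suppose the tools shift the internal clock 2 ns earlier than the board’s clock, and the FPGA’s minimal clock-to-output is 1 ns. An output can then toggle 1 ns before the board clock’s edge. If the receiver’s hold time is 1 ns (i.e. set_output_delay -min -1), the output must remain stable until 1 ns after that edge, so the hold check fails by 2 ns. With the -min constraint in place, the tools report this and work around it; without it, the design passes timing and possibly fails on the board.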

Outline of example design

We’ll assume a test_clk input clock, a test_in input pin, and a test_out output pin, with the following relationship:

   reg test_samp; // Intermediate register (test_out is assumed an output reg)

   always @(posedge test_clk)
     begin
       test_samp <= test_in;   // Sample the input pin
       test_out <= test_samp;  // Drive the output pin one clock later
     end

No PLL is used to align the internal clock with the board’s test_clk, so there’s a significant clock delay.

And the following timing constraints are applied in the SDC/XDC file:

create_clock -name theclk -period 20 [get_ports test_clk]
set_output_delay -clock theclk -max 8 [get_ports test_out]
set_output_delay -clock theclk -min -3 [get_ports test_out]
set_input_delay -clock theclk -max 4 [get_ports test_in]
set_input_delay -clock theclk -min 2 [get_ports test_in]
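
To check what the tools actually did with these constraints, pull the relevant timing reports from the Tcl console. The following is a sketch; the exact flags may vary between tool versions:

# Vivado: worst setup and hold paths through each of the ports
report_timing -from [get_ports test_in] -delay_type min_max -max_paths 2
report_timing -to [get_ports test_out] -delay_type min_max -max_paths 2

# Quartus (TimeQuest Tcl), roughly the equivalent:
report_timing -from [get_ports test_in] -setup -npaths 2
report_timing -to [get_ports test_out] -hold -npaths 2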

As the tools’ timing calculations are rather long, they are shown on separate pages, one for Vivado and one for Quartus.
