
I/O timing constraints in SDC syntax

This page belongs to a series of pages about timing. The previous pages explained the theory behind timing calculations, showed how to write several timing constraints and discussed the principles of timing closure. The previous page explained a few fundamental principles regarding I/O timing constraints. This page continues with the practical aspects of this topic.

Introduction

The purpose of I/O timing constraints is to ensure a reliable interface with the outer world: They guarantee that each signal from the outer world arrives reliably at the relevant flip-flop on the FPGA. Likewise, they also make certain that each signal from the FPGA to the outer world arrives reliably at the flip-flop on the external component.

I/O timing constraints are the most difficult kind of timing constraints: Some of the timing parameters depend on the external electronic components on the PCB (Printed Circuit Board). It's usually necessary to read the datasheets of these external components in order to determine the correct timing requirements. Often, a pen-and-paper calculation is required in order to obtain the correct timing constraint.

It's tempting to skip this complicated task in favor of a simple alternative: Trial and error. This shortcut consists of letting the tools do what they want, and seeing if it works. If there is a problem with an input port, use the opposite clock edge on the flip-flop that receives the input signal. Likewise, if an output doesn't work well, use the opposite clock edge on the output flip-flop. This approach usually gets the electronics working quickly and with little effort.

The problem with this approach is that the timing behavior changes with the temperature. There are also uncertainties because of the manufacturing process of semiconductor components. This is true for the FPGA as well as for the external electronics. So improper timing constraints can lead to a "Black Magic Mode". This is true for all timing constraints, but it happens more often with I/O timing constraints.

The worst thing with skipping the pen-and-paper calculations is that sometimes it's impossible to guarantee the timing requirements because of the board design. If a situation like this is discovered during the process of the board design, there is often a simple solution (changing the wiring to the FPGA or rethinking the distribution of clocks). But if a flaw of this sort is discovered after the PCB is produced, there might be no way to fix it. In other words, it becomes impossible to guarantee that the electronics works reliably.

What's in this page

This page outlines the basic timing constraints for I/O ports. The syntax that is shown here is SDC, which is used by Vivado and Quartus, as well as other FPGA tools.

This page begins with the timing constraints that are dedicated to I/O: set_input_delay and set_output_delay. The meaning of these constraints is explained. This is followed by a reference to two separate pages that show examples of timing reports by Vivado and Quartus.

It's also possible to define timing constraints with set_max_delay and set_min_delay. These commands are more suitable in some scenarios. We have already met them regarding paths inside the FPGA. Their meaning as I/O timing constraints is also explained below.

This page discusses only the technical aspects of these I/O timing constraints. For the theoretical part, refer to the previous page, which also shows how to define false paths for I/O ports.

Meaning of set_input_delay and set_output_delay

These two commands are suitable when the interface with the external component is system synchronous. In short, set_input_delay defines the range of times, relative to the clock edge, during which the signal at the input port may change its value, and set_output_delay defines how long the signal at the output port must be kept stable before (and, possibly, after) the clock edge.

It's important to note that these definitions are correct only if these two conditions are met:

These two conditions are necessary to ensure that the clock delays are calculated correctly.

Also note that if neither -min nor -max is used, the command is interpreted as two commands: One command with -min, and a second command with -max. This is probably not what you want.
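For example, this command (using the clock and port names from the example further down this page):

set_input_delay -clock theclk 4 [get_ports test_in]

is treated exactly like this pair of commands:

set_input_delay -clock theclk -min 4 [get_ports test_in]
set_input_delay -clock theclk -max 4 [get_ports test_in]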

The definitions of these commands are a little confusing: set_input_delay defines when the data signal is allowed to change its value after a clock edge. But set_output_delay defines when a clock edge is allowed after the data signal has changed its value. Presumably, the rationale behind these definitions is that the numbers from the datasheet can be used directly in the timing constraints.
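For example, suppose that the datasheet of the external component that drives @test_in specifies a clock-to-output between 2 ns and 4 ns (these are made-up numbers, and the trace delay is ignored for now). These numbers can be copied directly into the timing constraints:

set_input_delay -clock theclk -min 2 [get_ports test_in]
set_input_delay -clock theclk -max 4 [get_ports test_in]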

The set_input_delay and set_output_delay commands have several options which are not covered here. In particular, the falling clock edge can be chosen as the time reference. Refer to the tools' documentation for more information.

Always use both min and max

It may seem pointless to insist on using both -min and -max for every timing constraint. For example, if the tsetup of the external component is 8 ns, what's wrong with this?

set_output_delay -clock theclk 8 [get_ports test_out]

This defines the setup time correctly. As for the hold time, it is unintentionally defined as –8 ns. This allows the output port to change its value 8 ns before the clock. But who cares? That couldn’t happen, could it?

Well, actually it can. I have already discussed the usage of a PLL to generate the internal clock, based upon a clock from an input pin (i.e. the clock that is visible on the board). This allows the PLL to align the FPGA’s internal clock with the input clock. The PLL does this by moving (shifting) the clock slightly to compensate for the delay of the clock distribution network.

Actually, the FPGA tools may feel free to move the clock to slightly earlier than the board's clock, in order to meet a timing constraint: If the clock inside the FPGA is earlier than the external clock, the clock-to-output that is perceived by the external component becomes smaller. This is because the flip-flop inside the FPGA is synchronous with the internal clock, but the visible timing is relative to the external clock.

But when the FPGA's internal clock is earlier than the clock on the board, the FPGA's output can change before the clock edge of the external clock. This can lead to a violation of hold time on the component that receives these outputs.

If the set_output_delay command defines the hold time as –8 ns, it doesn't mean that the output will change its value 8 ns before the clock. But this allows the tools to move the internal clock in a way that violates the thold requirement. Using set_output_delay with -min correctly prevents this from happening.
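For example, here is a hedged sketch that assumes that the external component requires a tsu of 8 ns and a thold of 3 ns (made-up numbers), and that ignores the trace delay for now. The -min value is the thold with its sign reversed:

set_output_delay -clock theclk -max 8 [get_ports test_out]
set_output_delay -clock theclk -min -3 [get_ports test_out]

The complete example further down this page uses the same values.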

Adjustments due to trace delay

It's important to remember that the tools don't take the PCB's trace delay into account. The tools don't have this information. Hence they assume that this delay is zero when making the timing calculations for set_input_delay and set_output_delay. The correction is to add the trace delay to the datasheet's values of the clock-to-output and tsu.
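For example, with made-up numbers: if the external component's clock-to-output can be as large as 4 ns, and the trace delay of the input's trace is 0.5 ns, the -max value becomes 4.5 ns. Likewise, if the external tsu is 8 ns and the trace delay of the output's trace is 0.5 ns:

set_input_delay -clock theclk -max 4.5 [get_ports test_in]
set_output_delay -clock theclk -max 8.5 [get_ports test_out]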

It might also be necessary to take the clock skew into consideration: On a perfect PCB, the clock arrives at all components with the same delay. In real life, there is a possible clock skew between the FPGA and the external component. Such clock skew is not taken into account in the tools' timing calculations.

Hence, if the clock arrives earlier at the FPGA (relative to the external component), the following corrections are necessary:

Likewise, if the clock arrives later at the FPGA, the following corrections are necessary:

Note that the timing report may display a clock skew that is not zero, regardless of the adjustments that are outlined here. However, the clock skew that appears in the timing report relates to clock delays inside the FPGA, not to those on the PCB.

Examples of timing reports

The examples are based upon the following Verilog code:

module top(
    input test_clk,
    input test_in,
    output reg test_out
);

   reg test_samp;

   // Sample the input pin and update the output pin on every
   // rising edge of the input clock.
   always @(posedge test_clk)
     begin
        test_samp <= test_in;
        test_out <= test_samp;
     end
endmodule

@test_clk is the input clock, @test_in is an input pin, and @test_out is an output pin. Note that no PLL is used to align the internal clock with the board's clock, so there's a significant clock delay.

The timing constraints are as follows:

create_clock -name theclk -period 20 [get_ports test_clk]
set_output_delay -clock theclk -max 8 [get_ports test_out]
set_output_delay -clock theclk -min -3 [get_ports test_out]
set_input_delay -clock theclk -max 4 [get_ports test_in]
set_input_delay -clock theclk -min 2 [get_ports test_in]
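With zero trace delay and no clock skew on the PCB, these numbers would fit an external component whose clock-to-output is between 2 ns and 4 ns, whose tsu is 8 ns and whose thold is 3 ns.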

As the timing reports are rather long, they are shown on two separate pages: one page for Vivado, and another page for Quartus.

Using set_max_delay and set_min_delay

When the interface with the external component is source synchronous, the use of set_input_delay and set_output_delay is less natural. set_max_delay and set_min_delay are more suitable for this situation. In a previous page, these two commands were mentioned only as supplements or adjustments (timing exceptions) to clock period constraints. All paths were internal: They started and ended at a sequential element. When these commands are used as I/O timing constraints, either the beginning or the end of the path is an I/O port. How is the timing analysis done in this situation?

The truth is that it's often pointless to delve into the timing analysis of these commands: Their purpose is usually to restrict the tools' behavior by writing timing constraints that the tools can barely fulfill. The values in these timing constraints are therefore found by repeated attempts to make the constraints stricter. With this methodology, the timing analysis itself has no importance.

That said, it's still a good idea to understand the calculations behind set_max_delay and set_min_delay:

Recall from before that a timing analysis has two parts. The first part is the source path: the time from a clock edge (at the external clock pin) until an updated and valid value is present at the data input of the second flip-flop. This part is the sum of three elements:

  1. The time that it takes for the clock edge to reach the first flip-flop (the clock path)
  2. The time it takes this flip-flop to update its value
  3. The time it takes for this new value to reach the second flip-flop

The second part is the destination path, which consists only of the time it takes for the clock edge to reach the second flip-flop. We already know when this flip-flop's input is updated (from the source path), so the time difference can be compared with the required tsu or thold, as applicable.
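In other words, for an ordinary period constraint the setup requirement boils down to this (ignoring the clock uncertainty): the source path must not exceed one clock period plus the destination path, minus the tsu of the second flip-flop.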

But that was true with two sequential elements. What happens when one of the sides is an I/O port? For the sake of timing analysis, the port is treated as if it was an imaginary flip-flop. The clock path delay to this flip-flop is zero.

Let's consider the usual situation, where the clock is defined with a create_clock command that relies on get_ports (as shown in almost all my examples). A clock path delay that is zero means that this imaginary flip-flop's clock input is connected directly to the clock pin. Hence there is no delay between the clock pin and this imaginary flip-flop.

All timing parameters of this flip-flop are zero: The tsu, the thold and the clock-to-output. This doesn't reflect any realistic electronic component, but it gives set_max_delay and set_min_delay a meaning when they are used with an output port: The port's clock-to-output. For example:

set_max_delay -to [get_ports test_out] 7
set_min_delay -to [get_ports test_out] 0

These two timing constraints require that @test_out's clock-to-output is between 0 ns and 7 ns.

Let's explain why: Recall that normally, a set_max_delay command is similar to a period constraint for paths between specific flip-flops. So what happens with the Destination Clock Path? The calculation begins at the time of the second clock edge, i.e. at 7 ns. But the clock path delay to the second flip-flop is zero, and the tsu of this flip-flop is also zero. So the result of the calculation of the Destination Clock Path is just 7 ns. This is the maximum allowed for the Source Path, which is calculated as usual: The Source Clock Path plus the Data Path. In summary, the requirement is that the data output is valid 7 ns after the first clock edge. This is exactly the definition of clock-to-output of the output port. If the create_clock command for the relevant clock was based upon get_ports, this clock-to-output is relative to the clock on the PCB.
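As a numeric illustration with made-up numbers: if the Source Clock Path is 2.5 ns and the Data Path is 4 ns, the Source Path adds up to 6.5 ns. This meets the 7 ns requirement with a slack of 0.5 ns. Had the sum been 7.3 ns, the timing constraint would have been violated by 0.3 ns.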

See the example of timing reports with Vivado.

Note that set_output_delay relates to the tsu or thold of the external component. set_max_delay defines the clock-to-output of the FPGA's output port. So the main difference between these two options is where the focus is.
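For example, with the 20 ns clock period from the example above, and ignoring the clock uncertainty, these two constraints express roughly the same setup requirement, because in both cases the output must be valid no later than 12 ns after the clock edge on the board:

set_output_delay -clock theclk -max 8 [get_ports test_out]
set_max_delay -to [get_ports test_out] 12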

Regarding an input port, there is no intuitive explanation of what set_max_delay and set_min_delay mean: The Source Path consists of the delay between the input pin and the data input of the flip-flop that receives the input signal. The Destination Clock Path starts at the time that is specified in the timing constraint command. The clock path delay is added to this time. These are meaningless calculations (see the timing reports). It's more natural to use set_input_delay, which relates to the clock-to-output of the external component.

Note that the timing analysis that is made on behalf of set_max_delay and set_min_delay doesn't depend on the clock period. Hence if the clock's frequency is changed, the same numbers are still used when the tools enforce these constraints. By contrast, the calculations for set_input_delay and set_output_delay depend on the clock's frequency.

A dependency of the I/O timing constraints on the clock's frequency can be an advantage or a disadvantage, depending on the circumstances. If the timing constraints are written based upon the timing parameters of the external component (and the interface is system synchronous), it's probably better to rely on set_input_delay and set_output_delay: These constraints will remain correct even when the clock's frequency changes. However, when the intention of the timing constraints is to force the tools to make certain choices (e.g. to use IOB registers), set_max_delay and set_min_delay are more likely to be suitable.

Using -datapath_only

One possible motivation for a timing constraint is to ensure that the FPGA tools do whatever it takes to achieve the minimal possible delay to or from the I/O port. This usually means using the IOB registers. It can also mean avoiding the insertion of an extra delay between an input port and the flip-flop (which the tools may do in order to meet the thold requirement with a better margin).

When a timing constraint is used for this purpose, there is no specific delay that is the goal. The idea is to prevent the tools from doing anything other than achieving the best possible result. If the FPGA tools support -datapath_only, it's better to use set_max_delay with this option. This completely eliminates the clock path delay from the calculation, so only the delay between the I/O port and the flip-flop is taken into account. This way, the timing constraint's requirement accurately corresponds to its purpose: controlling the delay between the flip-flop and the I/O pin.

This is a simple example for Vivado:

set_max_delay -datapath_only -from [get_ports test_in] 2
set_max_delay -datapath_only -from [all_registers] \
   -to [get_ports test_out] 3

But what's the purpose of the part that says "-from [all_registers]"? Why is there a need for a "-from"? The short answer is that Vivado refused to accept this command without a "-from" part. There was no similar requirement for the command that relates to the input port.

The timing reports with -datapath_only are at the bottom of the page with the examples.

Summary

set_input_delay and set_output_delay are often considered the preferred commands for I/O timing constraints. Indeed, this is usually the correct choice when the interface is system synchronous. In other scenarios, it may be worth considering set_max_delay and set_min_delay instead, as they may better reflect the required limitations on the I/O port's timing.


This page concludes this series of pages about timing. But there's one final page which summarizes many of the topics in a way that is convenient for inspecting an existing design.
