General
These are my notes as I set up the jitter attenuator devices (Silicon Labs’ Si5324 and Si5328) on Xilinx development boards as clean clock sources for use with MGTs. As such, this post isn’t all that organized.
I suppose that the reason Xilinx put a jitter attenuator, rather than just a plain clock generator, to drive the SFP+-related reference clock is first and foremost that it can be used as a high-quality clock source anyhow. On top of that, it can lock onto a reference clock derived from some other source. In particular, this allows syncing the SFP+ module's data rate with the data rate of an external data stream.
It's a bit of a headache to set up, but if you're into FPGAs, you should be used to that already.
Jots on random topics
- To get the register settings, run DSPLLsim, a tool published by Silicon Labs (part of the Precision Clock EVB Software). There is no need for an evaluation board to run this tool for the sake of obtaining register values, and no need to install the drivers it suggests during the software's installation.
- Note that there’s an outline for the settings to make with DSPLLsim in a separate section below, which somewhat overlaps these jots.
- Enable Free Run Mode = the crystal, which is connected to XA/XB, is routed to CKIN2. So there's no need to feed the device with a clock; instead, set the reference clock to CKIN2. That works as a 114.285 MHz (±20 ppm) reference clock on all Xilinx boards I've seen so far (including KC705, ZC706, KCU105, ZCU106, KCU116, VC707, VC709, VCU108 and VCU118). This is also the frequency recommended in Silicon Labs' docs for a crystal reference.
- The clock outputs should be set to LVDS. This is how they’re treated on Xilinx’ boards.
- f3 is the frequency at the input of the phase detector (i.e. the reference clock after division). The higher f3 is, the lower the jitter.
- The chip’s CMODE is tied to GND, so the interface is I2C.
- RATE0 and RATE1 are set to MM by not connecting them, so the internal pull-up-and-down resistors set them to this mode. This setting requires a 3rd overtone crystal with a 114.285 MHz frequency, which is exactly what's on the board (see Si53xx Reference Manual, Appendix A, Table 50).
- The device's I2C address depends on the A2-A0 pins, which are all tied to GND on the boards I've bothered to check. The 7-bit I2C address is { 4'b1101, A[2:0] }. Since A=0, it's 7-bit address 0x68, or 0xD0 / 0xD1 in 8-bit representation (see the Python sketch at the end of these jots).
- Except for SCL, SDA, INT_C1B and RST_B, there are no control pins connected to the FPGA. Hence all other outputs can be configured to be off. RST_B is an input that resets the device. INT_C1B indicates problems with CKIN1 (only), which is useless in Free Run Mode, or may indicate an interrupt condition, which is more useful (but not really helpful, due to the catch explained a few bullets down).
- Hence INT_PIN and CK1_BAD_PIN (bits 0 and 2 in register 20) should be '1', so that INT_C1B functions as an interrupt pin. The polarity of this pin is set by bit 0 of register 22 (INT_POL). Also set LOL_MSK to '1' (bit 0 of register 24), so that INT_C1B is asserted until the PLL is locked. This might take several seconds if the loop bandwidth is narrow (which is good for jitter), so the FPGA should monitor this pin and hold the CPLL in reset until the reference clock is stable.
- There is a catch, though: The INT_C1B pin reflects the value of the internal register LOL_FLG (given the outlined register setting), which latches a Loss Of Lock, and resets it only when zero is written to bit 0 of register 132. Or so I understand from the datasheet's description of register 132, saying "Held version of LOL_INT. [ ... ] Flag cleared by writing 0 to this bit." Also from the Si53xx Reference Manual, section 6.11.9: "Once an interrupt flag bit is set, it will remain high until the register location is written with a '0' to clear the flag". Hence the FPGA must keep writing to this register until lock is achieved, or INT_C1B will be asserted forever. This could have been avoided by sensing the LOL output; however, it's not connected to the FPGA.
- Alternatively, one can poll the LOL_INT flag directly, as bit 0 in register 130. If the I2C bus is going to be continuously busy until lock is achieved, one might as well read the non-latched version directly.
- To get an idea of lock times, take a look at Silicon Labs' AN803, where lock times of minutes are discussed as a normal thing. It also mentions the possibility that the lock indicator (LOL) may toggle while acquiring lock, in particular with old devices.
- Even if the LOL signal wobbles, the output clock's frequency is already correct, so it's a matter of slight phase adjustments, made with a narrow bandwidth loop. So the clock is good enough to work with as soon as LOL has gone low for the first time, in particular in a case like mine, where the CPLL should be able to tolerate an SSC running between 0 and -5000 ppm at a 33 kHz rate.
- It gets better: Before accessing any I2C device on the board, the I2C switch (PCA9548A) must be programmed to let the signal through to the jitter attenuator: It wakes up with all paths off. The I2C switch's own 7-bit address is { 4'b1110, A[2:0] }, where A[2:0] are its address pins. Check the respective board's user guide for the actual address that has been set up.
- Checking the time to lock on an Si5324 with the crystal as reference, it ranged from 600 ms to 2000 ms. It seems to depend on the temperature of something: the warmer, the longer the lock time. Turning off the board for a minute brings back the lower end of lock times. But I've also seen quite random lock times, so don't rely on any specific figure, nor on the lock being quicker after powerup. Just check LOL and be ready to wait.
- As for Si5328, lock times are around 20 seconds (!). Changing the loop bandwidth (with BWSEL) didn’t make any dramatic difference.
- I recommend looking at the last page of Silicon Labs' guides for the "We make things simple" slogan.
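To make the address jots concrete, here's a minimal Python sketch of the whole access sequence: compute the addresses, open the mux, and poll LOL_INT until lock. It assumes a Linux host with the smbus2 package; on a real board it's the FPGA's I2C master doing the same transactions, and the mux channel is hypothetical (check the board's user guide):

from smbus2 import SMBus
import time

# 7-bit addresses: { 4'b1101, A[2:0] } and { 4'b1110, A[2:0] }, with A = 0
SI53XX_ADDR = 0b1101000    # 0x68, or 0xD0 / 0xD1 in 8-bit notation
MUX_ADDR = 0b1110000       # 0x70; verify against the board's user guide
MUX_CHANNEL = 0            # hypothetical: the mux port the Si53xx sits on

with SMBus(1) as bus:      # the I2C bus number depends on the host
    # The PCA9548A has a single register, so no register address byte:
    # just one data byte, where bit n enables channel n.
    bus.write_byte(MUX_ADDR, 1 << MUX_CHANNEL)

    # Poll LOL_INT (bit 0 of register 130) until the PLL locks.
    while bus.read_byte_data(SI53XX_ADDR, 130) & 0x01:
        time.sleep(0.1)    # lock can take seconds; just wait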
Clock squelch during self calibration
According to the reference manual, section 6.2.1, the user must set ICAL = 1 to initiate a self-calibration after writing new PLL settings to the I2C registers.
To avoid an output clock that is significantly off the desired frequency, SQ_ICAL (bit 4 in register 4) should be set to '1', and CKOUT_ALWAYS_ON (bit 5 in register 0) should be '0'. "SQ_ICAL" stands for squelching the clock until ICAL is completed.
The documentation is vague on whether the clock is squelched until the internal calibration is done, or until the lock is complete (i.e. LOL goes low). The reference manual goes “if SQ_ICAL = 1, the output clocks are disabled during self-calibration and will appear after the self-calibration routine is completed”, so this sounds like it doesn’t wait for LOL to go low. However the truth table for CKOUT_ALWAYS_ON and SQ_ICAL (Table 28 in the reference manual, and it also appears in the data sheets) goes “CKOUT OFF until after the first successful ICAL (i.e., when LOL is low)”. Nice, huh?
And if there is no clock when LOL is high, what happens if LOL wobbles a bit after the initial lock acquisition?
So I checked this with a scope on an Si5324 locking on XA/XB, with SQ_ICAL=1 and CKOUT_ALWAYS_ON=0: the output clock was squelched for about 400 μs, and then became active throughout the 1.7 seconds of lock acquisition, during which LOL was still high. So it seems like there's no risk of the clock going on and off if LOL wobbles.
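In register terms, and with the I2C access style from the sketch above, this boils down to a read-modify-write of two bits. A sketch, with the bit positions taken from the paragraphs above:

def set_squelch_during_ical(bus, addr):
    # SQ_ICAL = 1: bit 4 of register 4
    bus.write_byte_data(addr, 4, bus.read_byte_data(addr, 4) | (1 << 4))
    # CKOUT_ALWAYS_ON = 0: bit 5 of register 0
    bus.write_byte_data(addr, 0, bus.read_byte_data(addr, 0) & ~(1 << 5) & 0xff)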
LOCKT
This parameter controls the lock detector, and wins the prize for the most confusing parameter.
It goes like this: The lock detector monitors the phase difference with the reference clock. If the phase difference looks fine for a certain period of time (determined by the LOCKT parameter), LOL goes low to indicate a successful lock. As a side note, since LOCKT is the time it takes to determine that nothing bad has happened with the phase, it’s the minimum amount of time that LOL will be high, as stated in the reference manual.
But then there’s Table 1 in AN803, showing how LOCKT influences the locking time. Like, really dramatically. From several seconds to a split second. How could a lock detector make such a difference? Note that LOCKT doesn’t have anything to do with the allowed phase difference, but just how long it needs to be fine until the lock detector is happy.
The answer lies in the fast lock mechanism (enabled by FAST_LOCK). It's a wide bandwidth loop used to make the initial acquisition. When the lock detector indicates a lock, the loop is narrowed for the sake of achieving low jitter. So if the lock detector is too fast, switching to the low bandwidth loop occurs too early, and it takes longer to reach a proper lock. If it's too slow (and hence fussy), it wastes time on the wide bandwidth phase.
In the end, the datasheet recommends a value of 0x1 (53 ms). This seems to be the right one-size-fits-all balance.
Another thing that AN803 mentions briefly, is that LOL is forced high during the acquisition phase. Hence the lock detection, which switches the loop from acquisition (wide loop bandwidth) to low-jitter lock (narrow bandwidth) is invisible to the user, and for a good reason: The lock detector is likely to wobble at this initial stage, and there’s really no lock. The change made in January 2013, which is mentioned in this application note, was partly for producing a LOL signal that would make sense to the user.
DSPLLsim settings
A step-by-step guide to getting the register assignments to use with the device.
- After starting DSPLLsim, select "Create a new frequency plan with free running mode enabled".
- In the next step, select the desired device (5328 or 5324).
- In the following screen, set CKIN1 to 114.285 MHz and CKOUT1 to the desired output frequency (125 MHz in my case). Also set the XA-XB input to 114.285 MHz. Never mind that we’re not going to work with CKOUT1 — this is just for the sake of calculating the dividers’ values.
- Click “Calculate ratio” and hope for some pretty figures.
- The number of outputs can remain 2.
- Click “Next”.
- Now there's the minimal f3 to set. 16 kHz is a bit too optimistic; reducing it to 14 kHz was enough for the 125 MHz case. Click "Next" again.
- Now a list of frequency plans appears. If there are several options, odds are that you want the one with the highest f3. If there’s just one, fine. Go for it: Select that line and click “Next” twice.
- This brings us to the “Other NCn_LS Values” screen, which allows setting the output divisor of the second clock output. For the same clock on both outputs, click “Next”.
- A summary of the plan is given. Review and click "OK" to accept it. It will appear in the next window as well.
And this brings us to the complicated part: the actual register settings. This tool is intended to run against an evaluation module, updating its parameters from the computer. So some features won't work, but that doesn't matter: it's the registers' values we're after.
There are a few tabs:
- "General" tab: The defaults are OK. In particular, keep FASTLOCK enabled, and pick the narrowest bandwidth in range with BWSEL. The RATE attribute doesn't make any difference (it's set to MM by virtue of pins on the relevant devices). The Digital Hold feature is irrelevant, and so are its parameters (HIST_DEL and HIST_AVG); it's only useful when the reference clock is shaky.
- "Input Clocks" tab: Here we need changes. Set AUTOSEL_REG to 0 ("Manual") and CKSEL_REG to 1 for selecting CKIN2 (which is where the crystal is routed to). Set CLKSEL_PIN to 0, so that the selection is made by registers and not by any pins: we want to stick to the crystal, no matter what happens on the other clock input. Other than that, leave the defaults: CK_ACTV_POL doesn't matter, because the pin is unused, and the same goes for CS_CA. BYPASS_REG is correctly '0' (or the input clock goes directly to the output). LOCKT should remain 1, and VALT isn't so important in a crystal reference scenario (they set the lock detector's behavior). The FOS feature isn't used, so its parameters are irrelevant as well. Automatic clock priority is irrelevant, as manual mode is applied. If you want to be extra pedantic, set CLKINnRATE to 0x3 ("95 to 215 MHz"), which is the expected frequency, at least on CKIN2. And set FOSREFSEL to 0 for XA/XB. It won't make any difference, as the FOS detector's output is ignored.
- “Output Clocks” tab: Set the SFOUTn_REG to LVDS (0x7). Keep the defaults on the other settings. In particular, CKOUT_ALWAYS_ON unset and SQ_ICAL set ensures that a clock output is generated only after calibration. No hold logic is desired either.
- “Status” tab: Start with unchecking everything, and set LOSn_EN to 0 (disable). Then check only INT_PIN, CK1_BAD_PIN and LOL_MSK. This is discussed in the jots above. In particular, it’s important that none of the other *_MSK are checked, or the interrupt pin will be contaminated by other alarms. Or it’s unimportant, if you’re going to ignore this pin altogether, and poll LOL through the I2C interface.
The other two tabs (“Register Map” and “Pin Out”) are just for information. No settings there.
So now, the finale: writing the settings to a file. On the main menu, go to Options > Save Register Map File… and, well, save it. For extra confusion, the addresses are written in decimal, but the values in hex.
The somewhat surprising result is that the register settings for the Si5324 and Si5328 are identical, even though the loop bandwidth is dramatically different: both have BWSEL_REG set to 0x3, which is the narrowest bandwidth possible for either device, but for the Si5324 it means 8 Hz, and for the Si5328 it's 0.086 Hz. So completely different jitter performance, as well as lock time (much slower on the Si5328), is expected despite, once again, exactly the same register setting.
There is a slight mistake in the register map: there is no need to write to the register at address 131, as it contains statuses. And if anything, write zeroes to bits [2:0], so that these flags are possibly cleared, and surely not set. The value in the register map is just the default value.
Same goes for register 132, but writing to it is actually useful for the sake of clearing LOL_FLG. However, the value in the register map is 0x02, which doesn't clear this flag (and once again, it's simply the default).
It's worth noting that the last assignment in the register map writes 0x40 to address 136, which initiates self calibration.
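For what it's worth, here's a Python sketch of pushing such a register map into the device (same smbus2-style access as in the sketch further up). The parsing is an assumption on my side: I'm only going by the decimal-address, hex-value convention noted above, so adapt it to what the file actually looks like:

def load_register_map(bus, addr, path):
    for line in open(path):
        fields = line.replace(',', ' ').split()
        if len(fields) < 2 or not fields[0].isdigit():
            continue                           # skip headers and comments
        reg = int(fields[0], 10)               # addresses are in decimal
        val = int(fields[1].rstrip('hH'), 16)  # values are in hex
        if reg == 131:
            continue                           # status register, don't write it
        bus.write_byte_data(addr, reg, val)
    # The map's last entry writes 0x40 to register 136 (ICAL), which is
    # what kicks off the self-calibration, so the writing order matters.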
I2C interface
Silicon Labs' documentation on the I2C interface is rather minimal (see Si53xx Reference Manual, section 6.13), and refers to the standard for the specifics (with a dead link). However, quite obviously, the devices work with the common scheme for register access with a single address byte: the first byte is the I2C address, the second is the register address, and the third is the data to be written. All acknowledged.
A read access is also done the common way: a two-byte write sequence sets the register address, then comes a restart (no stop condition) and one byte with the I2C address (now with the read bit set), after which the data is read.
If there’s more than one byte of data, the address is auto-incremented, pretty much as usual.
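These are, in other words, garden-variety I2C register transactions. Here's a sketch using smbus2's raw message interface, which performs exactly the sequence just described (write the register address, repeated start, read); this illustrates the protocol, and is not Silicon Labs code:

from smbus2 import SMBus, i2c_msg

def si53xx_read(bus, addr, reg, count=1):
    # Set the register address, then a repeated start and the read;
    # the device auto-increments the address for multi-byte reads.
    setup = i2c_msg.write(addr, [reg])
    data = i2c_msg.read(addr, count)
    bus.i2c_rdwr(setup, data)    # both messages in one transaction
    return list(data)

def si53xx_write(bus, addr, reg, values):
    # A single write: I2C address, register address, then data byte(s).
    bus.i2c_rdwr(i2c_msg.write(addr, [reg] + list(values)))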
According to the datasheet, the RST pin must be asserted (held low) for at least 1 μs, after which the I2C interface can be used no sooner than 10 ms later (tREADY in the datasheets). It's also possible to use the I2C interface to induce a reset, with the RST_REG bit of register 136. The device performs a power-up reset after powering up (section 5.10 in the manual); nevertheless, resetting is a good idea to clear previous register settings.
As for the I2C mux, there are two devices used in Xilinx’ boards: TCA9548A and PCA9548A, which are functionally equivalent but with different voltage ranges, according to TI: “Same functionality and pinout but is not an equivalent to the compared device”.
The I2C interface of the I2C mux consists of a single register. Hence there is no address byte. It’s just the address of the device followed by the setting of the register. Two bytes, old school style. Bit n corresponds to channel n. Setting a bit to ’1′ enables the channel, and setting it to ’0′ disconnects it. Plain and simple.
The I2C mux has a reset pin which must be high during operation. On KCU105, it’s connected to the FPGA with a pull-up resistor, so it’s not clear why UG917 says it “must be driven High”, but it clearly mustn’t be driven low. Also on KCU105, the I2C bus is shared between the FPGA and a “System Controller”, which is a small Zynq device with some secret sauce software on it. So there are two masters on a single I2C bus by plain wiring.
According to UG917's Appendix C, the system controller does some initialization on power-up, hence generating traffic on the I2C bus. Even though it's not said explicitly, it's plausible to assume that it's engineered to finish its business before the FPGA has had a chance to load its bitstream and try its own stunts. Unless the user requests some operations via the UART interface with the controller, but that's a different story.
… or why does my GTH/GTY not come out of reset? Why are those reset_rx_done / reset_tx_done never asserted after a reset_all or a reset involving the CPLLs?
What’s this CPLL calibration thing about?
It turns out that some GTH/GTYs on Ultrascale and Ultrascale+ FPGAs have problems getting the CPLL to work reliably. I'll leave it to PG182 for the details on which ones. So CPLL calibration is a cute name for some workaround logic, based upon the well-known principle that if something doesn't work, turn it off, turn it on, and then check again. Repeat.
Well, not quite that simple. There’s also some playing with a bit (they call it FBOOST, jump start or not?) in the secret-sauce CPLL_CFG0 setting.
One way or another, this extra piece of logic simply checks whether the CPLL is at its correct frequency, and if so, it does nothing. If the CPLL's frequency isn't as expected, within a certain tolerance, it powers the CPLL off and on (with CPLLPD), resets it (CPLLRESET) and also plays with that magic FBOOST bit. And then it tries again, up to 15 times.
The need for access to the GT's DRPs is not just for that magic register's sake, though. One can't just measure the CPLL's frequency directly, as it's a few GHz; an FPGA can only work with a divided version of this clock. As there are several possibilities for routing and division of clocks inside the GT to its clock outputs, and the clock dividers depend on the application's configuration, there's a need to set up one such clock output to carry the CPLL's output divided by a known number. TXOUTCLK was chosen for this purpose.
So much of the calibration logic does exactly that: it sets some DRP registers to establish a known relation between the CPLL's output and TXOUTCLK (a division by 20, in fact), does its thing, and then returns those registers' values to what they were before.
A word of warning
My initial take on this CPLL calibration thing was to enable it for all targets (see below for how). Can’t hurt, can it? An extra check that the CPLL is fine before kicking off. What could possibly go wrong?
I did this on Vivado 2015.2, and all was fine. And then I tried a later Vivado version. Boom. The GTH didn't come out of reset. More precisely, the CPLL calibration clearly failed.
I can't say that I know exactly why, but I caught the change that makes the difference: Somewhere between 2015.2 and 2018.3, the Wizard started to set the GTH's CPLL_INIT_CFG0 instantiation parameter to 16'b0000001010110010. Generating the IP with this parameter set to its old value, 16'b0000000000011110, made the GTH work properly again.
I compared the reset logic as well as the CPLL calibration logic, and even though I found a few changes there, they were pretty minor (I also tried to revert some of them, but that didn't make any difference).
So the conclusion is that the change in CPLL_INIT_CFG0 failed the CPLL calibration. Why? I have no idea. The meaning of this parameter is unknown. And the CPLL calibration just checks that the frequency is OK. So maybe it slows down the lock, so the CPLL isn’t ready when it’s checked? Possibly, but this info wouldn’t help very much anyhow.
Now, CPLL calibration is supposed to be enabled only for FPGA targets that are known to need it. The question is whether the Transceiver IP’s Wizard is clever enough to set CPLL_INIT_CFG0 to a value that won’t make the calibration fail on those. I have no idea.
By enabling CPLL calibration for a target that doesn’t need it, I selected an exotic option, but the result should have worked nevertheless. Surely it shouldn’t break from one Vivado version to another.
So the bottom line is: don't fiddle with this option, and if your GTH/GTY doesn't come out of reset, consider turning CPLL calibration off, and see if that changes anything. If it does, I have no clear advice on what to do, but at least the mystery will be resolved.
Note that…
- The CPLL calibration is triggered by the GT’s reset_all assertion, as well as with reset_*_pll_and_datapath, if the CPLL is used in the relevant data path. The “reset done” signal for a data path that depends on the CPLL is asserted only if and when the CPLL calibration was successful and the CPLL is locked.
- If cplllock_out (when exposed) is never asserted, this could indicate that the CPLL calibration failed. So it makes sense to wait indefinitely for it — better fail loudly than work with a wobbling clock.
- Because the DRP clock is used to measure the period of time for counting the number of cycles of the divided CPLL clock, its frequency must be set accurately in the Wizard. Otherwise, the CPLL calibration will most certainly fail, even if the CPLL is perfectly fine.
- The calibration state machine takes control of some GT ports (listed below) from when cpllreset_in is deasserted, and until the calibration state machine has finished, with success or failure.
- While the calibration takes place, and if the calibration ends up failing, the cplllock_out signal presented to the user logic is held low. Only when the calibration is finished successfully, is the GT’s CPLLLOCK connected to the user logic (after a slight delay, and synchronized with the DRP clock).
Activating the CPLL calibration feature
See “A word of warning” above. You probably don’t want to activate this feature for all FPGA targets.
There are three possible choices for whether the CPLL calibration module is activated in the Wizard’s transceiver. This can’t be set from the GUI, but by editing the XCI file manually. There are two parameters in that file, PARAM_VALUE.INCLUDE_CPLL_CAL and MODELPARAM_VALUE.C_INCLUDE_CPLL_CAL, which should have the same value as follows:
- 0 — Don’t activate.
- 1 — Do activate.
- 2 — Activate only for devices which the Wizard deems have a problem (default).
Changing it from the default 2 to 1 makes Vivado respond with locking the core saying it “contains stale content”. To resolve this, “upgrade” the IP, which triggers a warning that user intervention is necessary.
And indeed, three new ports are added, and this addition of ports is also reflected in the XCI file (but nothing else should change): gtwiz_gthe3_cpll_cal_txoutclk_period_in, gtwiz_gthe3_cpll_cal_cnt_tol_in and gtwiz_gthe3_cpll_cal_bufg_ce_in.
These are three input ports, so they have to be assigned values. PG182's Table 3-1 gives the formulas for that (good luck with that), and the dissection notes below explain these formulas. But the TL;DR version is:
- gtwiz_gthe3_cpll_cal_bufg_ce_in should be assigned a constant 1'b1.
- gtwiz_gthe3_cpll_cal_txoutclk_period_in should be assigned with the constant value of P_CPLL_CAL_TXOUTCLK_PERIOD, as found in the transceiver IP’s synthesis report (e.g. mytransceiver_synth_1/runme.log).
- gtwiz_gthe3_cpll_cal_cnt_tol_in should be assigned with the constant value of P_CPLL_CAL_TXOUTCLK_PERIOD, divided by 100.
The description here relates to a single transceiver in the IP.
The meaning of gtwiz_gthe3_cpll_cal_txoutclk_period_in is as follows: Take the CPLL clock and divide it by 80. Count the number of clock cycles in a time period corresponding to 16000 DRP clock cycles. That’s the value to assign, as this is what the CPLL calibration logic expects to get.
gtwiz_gthe3_cpll_cal_cnt_tol_in is the number of counts that the result can be higher or lower than expected, and still the CPLL will be considered fine. As this is taken as the number of expected counts, divided by 100, this results in a ±1% clock frequency tolerance. Which is a good idea, given that common SSC clocking (PCIe, SATA, USB 3.0) might drag down the clock frequency by -5000 ppm, i.e. -0.5%.
The possibly tricky thing with setting these correctly is that they depend directly on the CPLL frequency. Given the data rate, there might be more than one possibility for a CPLL frequency, however it’s not expected that the Wizard will change it from run to run unless something fundamental is changed in the parameters (e.g. changing the data rate of one of the directions or both).
Besides, the CPLL frequency appears in the XCI file as MODELPARAM_VALUE.C_CPLL_VCO_FREQUENCY.
If the CPLL calibration is activated deliberately, it's recommended to verify that it actually takes place by setting a wrong value for gtwiz_gthe3_cpll_cal_txoutclk_period_in, and checking that the calibration fails (cplllock_out remains low).
Which ports are affected?
Looking at ultragth_gtwizard_gthe3.v gives the list of ports that the CPLL calibration logic fiddles with. Within the CPLL calibration generate clause they’re assigned with certain values, and in the “else” clause, with the plain bypass:
// Assign signals as appropriate to bypass the CPLL calibration block when it is not instantiated
else begin : gen_no_cpll_cal
assign txprgdivresetdone_out = txprgdivresetdone_int;
assign cplllock_int = cplllock_ch_int;
assign drprdy_out = drprdy_int;
assign drpdo_out = drpdo_int;
assign cpllreset_ch_int = cpllreset_int;
assign cpllpd_ch_int = cpllpd_int;
assign txprogdivreset_ch_int = txprogdivreset_int;
assign txoutclksel_ch_int = txoutclksel_int;
assign drpaddr_ch_int = drpaddr_int;
assign drpdi_ch_int = drpdi_int;
assign drpen_ch_int = drpen_int;
assign drpwe_ch_int = drpwe_int;
end
Dissection of Wizard’s output
The name of the IP was ultragth in my case, which is why this name appears all over this part.
The impact of changing the XCI file: In the Verilog files that are produced by the Wizard, MODELPARAM_VALUE.C_INCLUDE_CPLL_CAL is used directly when instantiating the ultragth_gtwizard_top, as the C_INCLUDE_CPLL_CAL instantiation parameter.
Also, the three new input ports are passed on to ultragth_gtwizard_top.v, rather than getting all zero assignments when they’re not exposed to the user application logic.
When activating the CPLL calibration (setting INCLUDE_CPLL_CAL to 1), additional constraints are also added to the IP's constraint file, adding a few new false paths as well as making sure that the timing calculations for TXOUTCLK are made according to the requested clock source. The latter is necessary, because the calibration logic fiddles with TXOUTCLKSEL during the calibration phase.
In ultragth_gtwizard_top.v the instantiation parameters and the three ports are just passed on to ultragth_gtwizard_gthe3.v, where the action happens.
First, the following defines are made (they like short names in Xilinx):
`define ultragth_gtwizard_gthe3_INCLUDE_CPLL_CAL__EXCLUDE 0
`define ultragth_gtwizard_gthe3_INCLUDE_CPLL_CAL__INCLUDE 1
`define ultragth_gtwizard_gthe3_INCLUDE_CPLL_CAL__DEPENDENT 2
and further down, we have this short and concise condition for enabling CPLL calibration:
if ((C_INCLUDE_CPLL_CAL == `ultragth_gtwizard_gthe3_INCLUDE_CPLL_CAL__INCLUDE) ||
(((C_INCLUDE_CPLL_CAL == `ultragth_gtwizard_gthe3_INCLUDE_CPLL_CAL__DEPENDENT) &&
((C_GT_REV == 11) ||
(C_GT_REV == 12) ||
(C_GT_REV == 14))) &&
(((C_TX_ENABLE == `ultragth_gtwizard_gthe3_TX_ENABLE__ENABLED) &&
(C_TX_PLL_TYPE == `ultragth_gtwizard_gthe3_TX_PLL_TYPE__CPLL)) ||
((C_RX_ENABLE == `ultragth_gtwizard_gthe3_RX_ENABLE__ENABLED) &&
(C_RX_PLL_TYPE == `ultragth_gtwizard_gthe3_RX_PLL_TYPE__CPLL)) ||
((C_TXPROGDIV_FREQ_ENABLE == `ultragth_gtwizard_gthe3_TXPROGDIV_FREQ_ENABLE__ENABLED) &&
(C_TXPROGDIV_FREQ_SOURCE == `ultragth_gtwizard_gthe3_TXPROGDIV_FREQ_SOURCE__CPLL))))) begin : gen_cpll_cal
which simply means that the CPLL calibration module should be generated if C_INCLUDE_CPLL_CAL is 1 (as I changed it to), or if it's 2 (the default) and some conditions for enabling it automatically are met.
Further down, the hint for how to assign those three new ports is given. Namely, if CPLL calibration was added automatically, due to the default assignment and a specific target FPGA, the values calculated by the Wizard itself are used:
// The TXOUTCLK_PERIOD_IN and CNT_TOL_IN ports are normally driven by an internally-calculated value. When INCLUDE_CPLL_CAL is 1,
// they are driven as inputs for PLL-switching and rate change special cases, and the BUFG_GT CE input is provided by the user.
wire [(`ultragth_gtwizard_gthe3_N_CH* 18)-1:0] cpll_cal_txoutclk_period_int;
wire [(`ultragth_gtwizard_gthe3_N_CH* 18)-1:0] cpll_cal_cnt_tol_int;
wire [(`ultragth_gtwizard_gthe3_N_CH* 1)-1:0] cpll_cal_bufg_ce_int;
if (C_INCLUDE_CPLL_CAL == `ultragth_gtwizard_gthe3_INCLUDE_CPLL_CAL__INCLUDE) begin : gen_txoutclk_pd_input
assign cpll_cal_txoutclk_period_int = {`ultragth_gtwizard_gthe3_N_CH{gtwiz_gthe3_cpll_cal_txoutclk_period_in}};
assign cpll_cal_cnt_tol_int = {`ultragth_gtwizard_gthe3_N_CH{gtwiz_gthe3_cpll_cal_cnt_tol_in}};
assign cpll_cal_bufg_ce_int = {`ultragth_gtwizard_gthe3_N_CH{gtwiz_gthe3_cpll_cal_bufg_ce_in}};
end
else begin : gen_txoutclk_pd_internal
assign cpll_cal_txoutclk_period_int = {`ultragth_gtwizard_gthe3_N_CH{p_cpll_cal_txoutclk_period_int}};
assign cpll_cal_cnt_tol_int = {`ultragth_gtwizard_gthe3_N_CH{p_cpll_cal_txoutclk_period_div100_int}};
assign cpll_cal_bufg_ce_int = {`ultragth_gtwizard_gthe3_N_CH{1'b1}};
end
These `ultragth_gtwizard_gthe3_N_CH things are just a duplication of the same vector, in case there are multiple channels in the same IP.
First, note that cpll_cal_bufg_ce is assigned constant 1. Not clear why this port is exposed at all.
And now to the calculated values. Given that it says
wire [15:0] p_cpll_cal_freq_count_window_int = P_CPLL_CAL_FREQ_COUNT_WINDOW;
wire [17:0] p_cpll_cal_txoutclk_period_int = P_CPLL_CAL_TXOUTCLK_PERIOD;
wire [15:0] p_cpll_cal_wait_deassert_cpllpd_int = P_CPLL_CAL_WAIT_DEASSERT_CPLLPD;
wire [17:0] p_cpll_cal_txoutclk_period_div100_int = P_CPLL_CAL_TXOUTCLK_PERIOD_DIV100;
a few rows above, and
localparam [15:0] P_CPLL_CAL_FREQ_COUNT_WINDOW = 16'd16000;
localparam [17:0] P_CPLL_CAL_TXOUTCLK_PERIOD = (C_CPLL_VCO_FREQUENCY/20) * (P_CPLL_CAL_FREQ_COUNT_WINDOW/(4*C_FREERUN_FREQUENCY));
localparam [15:0] P_CPLL_CAL_WAIT_DEASSERT_CPLLPD = 16'd256;
localparam [17:0] P_CPLL_CAL_TXOUTCLK_PERIOD_DIV100 = (C_CPLL_VCO_FREQUENCY/20) * (P_CPLL_CAL_FREQ_COUNT_WINDOW/(400*C_FREERUN_FREQUENCY));
localparam [25:0] P_CDR_TIMEOUT_FREERUN_CYC = (37000 * C_FREERUN_FREQUENCY) / C_RX_LINE_RATE;
it’s not all that difficult to do the math. And looking at Table 3-1 of PG182, the formulas match perfectly, but I didn’t feel very reassured by those.
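For the record, the arithmetic in a few lines of Python, with the values for my own case (assuming the frequency parameters are real-valued MHz figures, which the DIV100 value coming out as 40 rather than 0 suggests; in integer arithmetic, the division by 400*C_FREERUN_FREQUENCY would truncate to zero):

C_CPLL_VCO_FREQUENCY = 2500.0    # MHz, from the XCI file (5 Gb/s case)
C_FREERUN_FREQUENCY = 125.0      # MHz, the free-running / DRP clock
FREQ_COUNT_WINDOW = 16000        # DRP clock cycles, hardcoded

period = (C_CPLL_VCO_FREQUENCY / 20) * (FREQ_COUNT_WINDOW / (4 * C_FREERUN_FREQUENCY))
tol = (C_CPLL_VCO_FREQUENCY / 20) * (FREQ_COUNT_WINDOW / (400 * C_FREERUN_FREQUENCY))

print(int(period), int(tol))     # 4000 40, matching the report below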
So why bother? It's much easier to use the values calculated by the tools, as they appear in ultragth_synth_1/runme.log (here for a 5 Gb/s rate and a reference clock of 125 MHz, but YMMV, as there's more than one way to achieve a line rate):
Parameter P_CPLL_CAL_FREQ_COUNT_WINDOW bound to: 16'b0011111010000000
Parameter P_CPLL_CAL_TXOUTCLK_PERIOD bound to: 18'b000000111110100000
Parameter P_CPLL_CAL_WAIT_DEASSERT_CPLLPD bound to: 16'b0000000100000000
Parameter P_CPLL_CAL_TXOUTCLK_PERIOD_DIV100 bound to: 18'b000000000000101000
Parameter P_CDR_TIMEOUT_FREERUN_CYC bound to: 26'b00000011100001110101001000
The bottom line is hence to set gtwiz_gthe3_cpll_cal_txoutclk_period_in to 18'b000000111110100000, and gtwiz_gthe3_cpll_cal_cnt_tol_in to 18'b000000000000101000. Which is 4000 and 40 in plain decimal, respectively.
Dissection of CPLL Calibration module (specifically)
The CPLL calibrator is implemented in gtwizard_ultrascale_v1_5/hdl/verilog/gtwizard_ultrascale_v1_5_gthe3_cpll_cal.v.
Some basic reverse engineering. This may be inaccurate, as I wasn't very careful about the gory details on this matter. Also, when I say below that a register is modified, it's to the values listed after the outline of the state machine (further below).
So just to get an idea:
- TXOUTCLKSEL starts with value 0.
- Using the DRP ports, it fetches the existing values of the PROGCLK_SEL and PROGDIV registers, and modifies their values.
- It changes TXOUTCLKSEL to 3'b101, i.e. TXOUTCLK is driven by TXPROGDIVCLK. TXPROGDIVCLK can have more than one clock source, but here it's the CPLL directly, divided by PROGDIV (judging by the value assigned to PROGCLK_SEL).
- CPLLRESET is asserted for 32 clock cycles, and then deasserted.
The state machine now enters a loop as follows.
- The state machine waits 16384 clock cycles. This is essentially waiting for the CPLL to lock, however the CPLL’s lock detector isn’t monitored. Rather, it waits this fixed amount of time.
- txprogdivreset is asserted for 32 clock cycles.
- The state machine waits for the assertion of the GT’s txprgdivresetdone (possibly indefinitely).
- The state machine checks that the frequency counter’s output (more on this below) is in the range of TXOUTCLK_PERIOD_IN ± CNT_TOL_IN. If so, it exits this loop (think C “break” here), with the intention of declaring success. If not, and this is the 15th failed attempt, it exits the loop as well, but with the intention of declaring failure. Otherwise, it continues as follows.
- The FBOOST DRP register is read and then modified.
- 32 clock cycles later, CPLLRESET is asserted.
- 32 clock cycles later, CPLLPD is asserted for a number of clock cycles (determined by the module’s WAIT_DEASSERT_CPLLPD_IN input), and then deasserted (the CPLL is powered down and up!).
- 32 clock cycles later, CPLLRESET is deasserted.
- The FBOOST DRP register is restored to its original value.
- The state machine continues at the beginning of this loop.
And the final sequence, after exiting the loop:
- PROGDIV and PROGCLK_SEL are restored to their original values.
- CPLLRESET is asserted for 32 clock cycles, and then deasserted.
- The state machine waits for the assertion of the GT’s cplllock, possibly indefinitely.
- txprogdivreset is asserted for 32 clock cycles.
- The state machine waits for the assertion of the GT’s txprgdivresetdone (possibly indefinitely).
- The state machine finishes. At this point one of the module’s CPLL_CAL_FAIL or CPLL_CAL_DONE is asserted, depending on the reason for exiting the loop.
As for the values assigned when I said "modified" above, I won't get into them in detail, but just quote a related snippet of code. Note that these values are often shifted to their correct place in the DRP registers in order to fulfill their purpose:
localparam [1:0] MOD_PROGCLK_SEL = 2'b10;
localparam [15:0] MOD_PROGDIV_CFG = 16'hA1A2; //divider 20
localparam [2:0] MOD_TXOUTCLK_SEL = 3'b101;
localparam MOD_FBOOST = 1'b1;
Now, a word about the frequency counter: it's a bit complicated because of clock domain issues, but what it does is divide the clock under test by 4, and then count how many cycles the divided clock makes during a period of FREQ_COUNT_WINDOW_IN DRP clocks, which is hardcoded as 16000.
If we trust the comment saying that PROGDIV is set to 20, the frequency counter gets the CPLL clock divided by 20, divides it further by 4, and counts the result for 16000 DRP clocks. Which is exactly the formula given in Table 3-1 of PG182.
Are we having fun?
… or any other “unsupported” Linux distribution.
… or: How to trick the installer into thinking you’re running one of the supported OSes.
So I wanted to install Vivado 2020.1 on my Linux Mint 19 (Tara) machine. I downloaded the full package, and ran xsetup. A splash window appeared, and soon after it another window popped up, saying that my distribution wasn’t supported, listing those that were, and telling me that I could click “OK” to continue nevertheless. Which I did.
But then nothing happened. Completely stuck. And there was an error message on the console, reading:
$ ./xsetup
Exception in thread "SPLASH_LOAD_MESSAGE" java.lang.IllegalStateException: no splash screen available
at java.desktop/java.awt.SplashScreen.checkVisible(Unknown Source)
at java.desktop/java.awt.SplashScreen.getBounds(Unknown Source)
at java.desktop/java.awt.SplashScreen.getSize(Unknown Source)
at com.xilinx.installer.gui.H.run(Unknown Source)
Exception in thread "main" java.lang.IllegalStateException: no splash screen available
at java.desktop/java.awt.SplashScreen.checkVisible(Unknown Source)
at java.desktop/java.awt.SplashScreen.close(Unknown Source)
at com.xilinx.installer.gui.G.b(Unknown Source)
at com.xilinx.installer.gui.InstallerGUI.G(Unknown Source)
at com.xilinx.installer.gui.InstallerGUI.e(Unknown Source)
at com.xilinx.installer.api.InstallerLauncher.main(Unknown Source)
This issue is discussed in this thread on Xilinx' forum. Don't let the "Solved" title mislead you: they didn't solve it at all. But one of the answers there gave me the direction: fool the installer into thinking my OS is supported, after all. In this specific case there was no problem with the OS, but a bug in the installer that made it behave silly after that popup window.
It was also suggested to install Vivado in batch mode with
./xsetup -b ConfigGen
however, this doesn't allow selecting which devices I want to support. And this is a matter of tons of disk space.
So to make it work, I made changes to some files in /etc, and kept the original files in a separate directory. I also needed to move /etc/lsb-release into that directory, so it wouldn't mess things up.
I changed /etc/os-release (which is in fact a symlink to ../usr/lib/os-release on my machine, so watch it) to
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
and /etc/lsb-release to
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
This might very well be overkill, but once I got the installation running, I didn't bother to check what the minimal change is. Those who are successful might comment below on this, maybe?
Note that this might not work on a Red Hat based OS, because it seems that there are distribution-dependent issues. Since Linux Mint 19 is derived from Ubuntu 18.04, faking an earlier Ubuntu distro didn’t cause any problems.
This failed for me repeatedly at first, because I kept a copy of the original files in the /etc directory. This made the installation tool read the original files as well as the modified ones. Using strace, I found that the tool called cat with
execve("/bin/cat", ["cat", "/etc/lsb-release", "/etc/os-release", "/etc/real-upstream-release"], 0x5559689a47b0 /* 55 vars */) = 0
It looks like “cat” was called with some wildcard, maybe /etc/*-release? So the solution was to move the original files away to a new directory /etc/real/, and create the fake ones in their place.
Another problem was probably that /etc/lsb-release is a directory on my system. That made "cat" return a failure status, which can't be good.
Of course I have an opinion on the entire OS support situation, but if you're reading this, odds are that you agree with me anyhow.
Just a quick note to remind myself: there's a gap between the size of a disk, the used space and the available space. It's quite well-known that a certain percentage of the disk (that's 200 GB on a 3.6 TB backup disk) is reserved for root-only writes.
So the reminder is: No problem filling the disk beyond the Available = zero blocks point if you’re root. And it doesn’t matter if the files written don’t belong to root. The show goes on.
Also, the numbers shown by df are updated only when the file being written to is closed. So if a very long file is being copied, df might freeze for a while, and then boom.
This is important in particular when using the disk just for backing up data, because the process doing the backup is root, but the files aren’t.
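The root-reserved gap is also visible programmatically: statvfs() reports both the free count (root included) and the available count (everyone else). A quick Python illustration, with a hypothetical mount point:

import os

st = os.statvfs('/mnt/backup')            # hypothetical mount point
free = st.f_bfree * st.f_frsize           # free bytes, root included
avail = st.f_bavail * st.f_frsize         # free bytes for non-root users
print("Reserved for root: %.1f GB" % ((free - avail) / 2**30))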
But whatever you do, don't press CTRL-C while the extraction goes on. If tar quits in the middle, there will be file ownerships and permissions left unset, and symlinks set to zero-length files too. It wrecks the entire backup, even in places far away from where tar was working when it was stopped.
When to do this
Because the Gnome desktop is sure it knows what's best for me, and it's virtually impossible to just tell it that I want this screen resolution mode and no other, there is only one option left: lie about the monitor's graphics mode capabilities. Make the kernel feed it with fake screen information (EDID) that basically says "there is only this resolution". Leave it with one choice only.
What is EDID? It's a tiny chunk of information that is stored on a small EEPROM on the monitor. The graphics card fetches this blob through two I2C wires on the cable, and deduces from it which graphics modes (with painfully detailed timing parameters) the monitor supports. It's that little hex blob that appears when you go xrandr --verbose.
I should mention a post in Gentoo forum, which suggests making X ignore EDID info by using
Option "UseEDID" "false"
Option "UseEDIDFreqs" "false"
in /etc/X11/xorg.conf, or is it a file in /usr/share/X11/xorg.conf.d/? And then just set the screen mode old-school. I didn't bother to check this; there are too many players in this game. Faking the EDID seemed to be a much better idea than asking politely not to consider it.
How to feed a fake EDID
The name of the game is Kernel Mode Setting (KMS). Among other things, it allows loading a file from /lib/firmware to be used as the screen information (EDID), instead of getting it from the screen.
For this to work, the CONFIG_DRM_LOAD_EDID_FIRMWARE kernel compilation option must be enabled (set to "y").
Note that unless Early KMS is required, the firmware file is loaded after the initramfs stage. In other words, it’s not necessary to push the fake EDID file into the initramfs, but it’s OK to have it present only in the filesystem that is mounted after the initramfs.
The EDID file should be stored in /lib/firmware/edid (create the directory if necessary) and the following command should be added to the kernel command line:
drm_kms_helper.edid_firmware=edid/fake_edid.bin
(for kernels 4.15 and later, there’s a drm.edid_firmware parameter that is supposed to be better in some way).
Generating a custom EDID file
I needed a special graphics mode to solve a problem with my OLED screen. Meaning I had to cook my own EDID file. It turned out quite easy, actually.
The kernel’s doc for this is Documentation/admin-guide/edid.rst
In the kernel’s tools/edid, edit one of the asm files (e.g. 1920x1080.S) and set the parameters to the correct mode. This file has just defines. The actual data format is produced in edid.S, which is included at the bottom. The output in this case is 1920x1080.bin. Note that the C file (1920x1080.c) is an output as well in this case — for reference of some other use, I guess.
And then just type “make” in tools/edid/ (don’t compile the kernel, that’s really not necessary for this).
The numbers in the asm file are in a slightly different notation, as explained in the kernel doc. Not a big deal to figure out; the sketch after the example below spells out the arithmetic.
In my case, I translated this xrandr mode line
oledblack (0x10b) 173.000MHz -HSync +VSync
h: width 1920 start 2048 end 2248 total 2576 skew 0 clock 67.16KHz
v: height 1080 start 1083 end 1088 total 1120 clock 59.96Hz
to this:
/* EDID */
#define VERSION 1
#define REVISION 3
/* Display */
#define CLOCK 173000 /* kHz */
#define XPIX 1920
#define YPIX 1080
#define XY_RATIO XY_RATIO_16_9
#define XBLANK 656
#define YBLANK 40
#define XOFFSET 128
#define XPULSE 200
#define YOFFSET 3
#define YPULSE 5
#define DPI 96
#define VFREQ 60 /* Hz */
#define TIMING_NAME "Linux FHD"
/* No ESTABLISHED_TIMINGx_BITS */
#define HSYNC_POL 0
#define VSYNC_POL 0
#include "edid.S"
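The translation is plain arithmetic on the modeline: the blank is the total minus the width, the offset (front porch) is the start minus the width, and the pulse is the end minus the start, likewise for the vertical axis. A quick Python check against the numbers above:

# xrandr modeline values: width, start, end, total (h and v)
h_width, h_start, h_end, h_total = 1920, 2048, 2248, 2576
v_width, v_start, v_end, v_total = 1080, 1083, 1088, 1120

print("XBLANK", h_total - h_width)    # 656
print("XOFFSET", h_start - h_width)   # 128 (horizontal front porch)
print("XPULSE", h_end - h_start)      # 200 (hsync pulse width)
print("YBLANK", v_total - v_width)    # 40
print("YOFFSET", v_start - v_width)   # 3
print("YPULSE", v_end - v_start)      # 5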
There seems to be a distinction between standard resolution modes and those that aren’t. I got away with this, because 1920x1080 is a standard mode. It may be slightly trickier with a non-standard mode.
When it works
This is what it looks like when all is well. First, the kernel logs. In my case, because I didn’t put the file in the initramfs, loading it fails twice:
[ 3.517734] platform HDMI-A-3: Direct firmware load for edid/1920x1080.bin failed with error -2
[ 3.517800] [drm:drm_load_edid_firmware [drm_kms_helper]] *ERROR* Requesting EDID firmware "edid/1920x1080.bin" failed (err=-2)
and again:
[ 4.104528] platform HDMI-A-3: Direct firmware load for edid/1920x1080.bin failed with error -2
[ 4.104580] [drm:drm_load_edid_firmware [drm_kms_helper]] *ERROR* Requesting EDID firmware "edid/1920x1080.bin" failed (err=-2)
But then, much later, it loads properly:
[ 19.864966] [drm] Got external EDID base block and 0 extensions from "edid/1920x1080.bin" for connector "HDMI-A-3"
[ 93.298915] [drm] Got external EDID base block and 0 extensions from "edid/1920x1080.bin" for connector "HDMI-A-3"
[ 109.573124] [drm] Got external EDID base block and 0 extensions from "edid/1920x1080.bin" for connector "HDMI-A-3"
[ 1247.290084] [drm] Got external EDID base block and 0 extensions from "edid/1920x1080.bin" for connector "HDMI-A-3"
Why several times? Well, the screen resolution is probably set up several times as the system goes up. There’s clearly a quick screen flash a few seconds after the desktop goes up. I don’t know exactly why, and at this stage I don’t care. The screen is at the only mode allowed, and that’s it.
And now to how xrandr sees the situation:
$ xrandr -d :0 --verbose
[ ... ]
HDMI3 connected primary 1920x1080+0+0 (0x10c) normal (normal left inverted right x axis y axis) 500mm x 281mm
Identifier: 0x48
Timestamp: 21339
Subpixel: unknown
Gamma: 1.0:1.0:1.0
Brightness: 1.0
Clones:
CRTC: 0
CRTCs: 0
Transform: 1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000
filter:
EDID:
00ffffffffffff0031d8000000000000
051601036d321c78ea5ec0a4594a9825
205054000000d1c00101010101010101
010101010101944380907238284080c8
3500f41911000018000000ff004c696e
75782023300a20202020000000fd003b
3d424412000a202020202020000000fc
004c696e7578204648440a2020200045
aspect ratio: Automatic
supported: Automatic, 4:3, 16:9
Broadcast RGB: Automatic
supported: Automatic, Full, Limited 16:235
audio: auto
supported: force-dvi, off, auto, on
1920x1080 (0x10c) 173.000MHz -HSync -VSync *current +preferred
h: width 1920 start 2048 end 2248 total 2576 skew 0 clock 67.16KHz
v: height 1080 start 1083 end 1088 total 1120 clock 59.96Hz
Compare the EDID part with 1920x1080.c, which was created along with the binary:
{
0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
0x31, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x05, 0x16, 0x01, 0x03, 0x6d, 0x32, 0x1c, 0x78,
0xea, 0x5e, 0xc0, 0xa4, 0x59, 0x4a, 0x98, 0x25,
0x20, 0x50, 0x54, 0x00, 0x00, 0x00, 0xd1, 0xc0,
0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x94, 0x43,
0x80, 0x90, 0x72, 0x38, 0x28, 0x40, 0x80, 0xc8,
0x35, 0x00, 0xf4, 0x19, 0x11, 0x00, 0x00, 0x18,
0x00, 0x00, 0x00, 0xff, 0x00, 0x4c, 0x69, 0x6e,
0x75, 0x78, 0x20, 0x23, 0x30, 0x0a, 0x20, 0x20,
0x20, 0x20, 0x00, 0x00, 0x00, 0xfd, 0x00, 0x3b,
0x3d, 0x42, 0x44, 0x12, 0x00, 0x0a, 0x20, 0x20,
0x20, 0x20, 0x20, 0x20, 0x00, 0x00, 0x00, 0xfc,
0x00, 0x4c, 0x69, 0x6e, 0x75, 0x78, 0x20, 0x46,
0x48, 0x44, 0x0a, 0x20, 0x20, 0x20, 0x00, 0x45,
};
So it definitely took the bait.
Upgrading the kernel should be quick and painless…
After upgrading the kernel from v5.3 to 5.7, a lot of systemd services failed (Debian 8), in particular systemd-remount-fs:
● systemd-remount-fs.service - Remount Root and Kernel File Systems
Loaded: loaded (/lib/systemd/system/systemd-remount-fs.service; static)
Active: failed (Result: exit-code) since Sun 2020-07-26 15:28:15 IDT; 17min ago
Docs: man:systemd-remount-fs.service(8)
http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
Process: 223 ExecStart=/lib/systemd/systemd-remount-fs (code=exited, status=1/FAILURE)
Main PID: 223 (code=exited, status=1/FAILURE)
Jul 26 15:28:15 systemd[1]: systemd-remount-fs.service: main process exited, code=exited, status=1/FAILURE
Jul 26 15:28:15 systemd[1]: Failed to start Remount Root and Kernel File Systems.
Jul 26 15:28:15 systemd[1]: Unit systemd-remount-fs.service entered failed state.
and indeed, the root NFS remained read-only (checked with the "mount" command), which explains why so many other services failed.
After an strace session, I managed to nail down the problem: The system call to mount(), which was supposed to do the remount, simply failed:
mount("10.1.1.1:/path/to/debian-82", "/", 0x61a250, MS_REMOUNT, "addr=10.1.1.1") = -1 EINVAL (Invalid argument)
On the other hand, any attempt to remount another read-only NFS mount, which had been mounted the regular way (i.e. after boot) went through clean, of course:
mount("10.1.1.1:/path/to/debian-82", "/mnt/tmp", 0x61a230, MS_REMOUNT, "addr=10.1.1.1") = 0
The only apparent difference between the two cases is the third argument, which is ignored for MS_REMOUNT according to the manpage.
The manpage also says something about the EINVAL return value:
EINVAL A remount operation (MS_REMOUNT) was attempted, but source was not already mounted on target.
A hint to the problem could be that the type of the mount, as listed in /proc/mounts, is "nfs" for the root mounted filesystem, but "nfs4" for the one in /mnt/tmp. The reason for this difference isn't completely clear.
The solution
So it's all about that little hint: if the nfsroot is selected to be mounted as version 4, there's no problem remounting it. Why this made a difference from one kernel version to another is beyond me. So the fix is to add nfsvers=4 to the nfsroot assignment. Something like
root=/dev/nfs nfsroot=10.1.1.1:/path/to/debian-82,nfsvers=4
For the record, I re-ran the remount command with strace again, and exactly the same system call was made, including that most-likely-ignored 0x61a250 argument, and it simply returned success (zero) instead of EINVAL.
As a side note, the rootfstype=nfs in the kernel command line is completely ignored. Write any junk instead of “nfs” and it makes no difference.
Another yak shaved successfully.
What’s this
Every now and then I find myself looking at an Oops or kernel panic, reminding myself how to approach this. So this is where I write down the jots as I go. This isn’t very organized.
Disassembly
- First things first: disassemble the relevant parts:
$ objdump -DS driver.ko > ~/Desktop/driver.asm
- Doing this on the .o or .ko file gives exactly the same result. Like diff-exactly.
- Or if the region of interest belongs to the kernel itself, this can be done (in the kernel tree after a matching compilation):
$ objdump -DS kernel/locking/spinlock.o > ~/Desktop/spinlock.asm
- Or, directly on the entire kernel image (at the root of the kernel tree of a compiled kernel). This doesn't just save looking up where the relevant function is defined (i.e. in which object file); the labels used in the function will also be correct, even when using -d instead of -DS.
$ objdump -d vmlinux
Then search for the function with a colon at the end, so it matches the beginning of the function, and not references to it. E.g.
<_raw_spin_lock_irqsave>:
- The -DS flag adds inline source in the disassembly when possible. If it fails, go for plain -d instead.
- With the -d flag, labels (in particular calls to functions) outside the disassembled module will appear to point at the address following the call instruction, because the address field after the opcode is zero: the disassembly is done on the binary before linking.
Stack trace
- The offending instruction is where the RIP part points. It's given in the same hex format as the stack trace.
- Each entry in the stack trace points at a position inside a function (i.e. after a label), namely right after the call to the function listed in the line above it. The format is label+hex/hex (a small script after the sample trace below turns such an entry into an address).
- The hex offsets relate to the label as seen in the disassembly (the title row before each segment): the first number is the offset relative to the label, and the second is the length of the function (i.e. the offset to the next label).
- <IRQ> and </IRQ> markups show the part that ran as an interrupt (not surprisingly).
- In x86 assembly notation, the first argument is source, the second is destination.
And here’s just a sample of a stack trace within an IRQ:
<IRQ>
dump_stack+0x46/0x59
__report_bad_irq+0x40/0xa9
note_interrupt+0x1c9/0x217
handle_irq_event_percpu+0x4c/0x6a
handle_irq_event+0x2e/0x4c
handle_fasteoi_irq+0x9b/0xff
handle_irq+0x19/0x1c
do_IRQ+0x61/0x106
common_interrupt+0xf/0xf
</IRQ>
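Turning such a trace entry into an address to look up is mechanical: find the <function>: label in the objdump output, and add the first hex offset to its start address. A small sketch of that in Python, assuming the objdump output format (function labels look like the hypothetical "ffffffff810a1b20 <handle_irq_event>:"):

import re, sys

def resolve(asm_path, entry):
    # entry looks like "handle_irq_event+0x2e/0x4c"
    m = re.match(r'(\w+)\+0x([0-9a-f]+)/0x[0-9a-f]+', entry)
    func, offset = m.group(1), int(m.group(2), 16)
    for line in open(asm_path):
        # objdump function labels: "ffffffff810a1b20 <handle_irq_event>:"
        lm = re.match(r'([0-9a-f]+) <%s>:' % re.escape(func), line)
        if lm:
            return int(lm.group(1), 16) + offset
    return None

# Usage: ./resolve.py vmlinux.asm handle_irq_event+0x2e/0x4c
print(hex(resolve(sys.argv[1], sys.argv[2])))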
To be continued, of course…
I sent that?
One morning, I got a bounce message from my own sendmail server, saying that it had failed to deliver a message I never sent. That's red alert. It means that someone managed to provoke my mail server into sending an outbound message. It's red alert, because my mail server effectively relays spam to any destination the spammer chooses. This could ruin the server's reputation horribly.
It turned out that an arriving mail required a return receipt, which was destined to just some mail address. There's an SMTP feature called Delivery Status Notification (DSN), which allows the client connecting to the mail server to ask for a mail "in return", informing the sender whether the mail was properly delivered. The problem is that the MAIL FROM / From addresses can be spoofed, pointing at a destination to spam. Congratulations, your mail server has just been tricked into sending spam. This kind of trickery is called backscatter.
Checking my own mail logs, DSN is a virtually unused feature. So it's probably just something spammers can take advantage of.
The relevant RFC for DSN is RFC1891. Further explanations on DSN can be found in one of sendmail’s tutorial pages.
How to turn DSN off
First, I recommend checking if it’s not disabled already, as explained below. In particular, if the paranoid-level “goaway” privacy option is used, DSN is turned off anyhow.
It’s actually easy. Add the noreceipts option to PrivacyOptions. More precisely, edit /etc/mail/sendmail.mc and add noreceipts to the list of already existing options. In my case, it ended up as
define(`confPRIVACY_FLAGS',dnl
`needmailhelo,needexpnhelo,needvrfyhelo,restrictqrun,restrictexpand,nobodyreturn,noetrn,noexpn,novrfy,noactualrecipient,noreceipts')dnl
and then run “make” in /etc/mail, and restart sendmail.
Turning off DSN is often recommended against in various sendmail guides, because it's considered a "valuable feature" or so. As mentioned above, I haven't seen it used by anyone other than spammers.
Will my mail server do DSN?
Easy to check, because the server announces its willingness to fulfill DSN requests at the beginning of the SMTP session, with the "250-DSN" line in the sample session below:
<<< 220 mx.mymailserver.com ESMTP MTA; Wed, 15 Jul 2020 10:22:32 GMT
>>> EHLO localhost.localdomain
<<< 250-mx.mymailserver.com Hello 46-117-33-227.bb.netvision.net.il [46.117.33.227], pleased to meet you
<<< 250-ENHANCEDSTATUSCODES
<<< 250-PIPELINING
<<< 250-8BITMIME
<<< 250-SIZE
<<< 250-DSN
<<< 250-DELIVERBY
<<< 250 HELP
>>> MAIL FROM:<spamvictim@billauer.co.il>
<<< 250 2.1.0 <spamvictim@billauer.co.il>... Sender ok
>>> RCPT TO:<legal_address@billauer.co.il> NOTIFY=SUCCESS
<<< 250 2.1.5 <legal_address@billauer.co.il>... Recipient ok
>>> DATA
<<< 354 Enter mail, end with "." on a line by itself
>>> MIME-Version: 1.0
>>> From: spamvictim@billauer.co.il
>>> To: legal_address@billauer.co.il
>>> Subject: Testing email.
>>>
>>>
>>> Just a test, please ignore
>>> .
<<< 250 2.0.0 06FAMWa1014200 Message accepted for delivery
>>> QUIT
<<< 221 2.0.0 mx.mymailserver.com closing connection
To test a mail server for its behavior with DSN, the script that I’ve already published can be used. To make it request a return receipt, the two lines that set the SMTP recipient should be changed to
die("Failed to set receipient\n")
if (! ($smtp->recipient( ($to_addr ), { Notify => ['SUCCESS'] } ) ) );
This change causes the NOTIFY=SUCCESS part in the RCPT TO line, which effectively requests a receipt from the server when the mail is properly delivered.
Note that if DSN isn't supported by the mail server (possibly because of the privacy option fix shown above), the SMTP session looks exactly the same, except that the 250-DSN line will be absent. The mail server then just ignores the NOTIFY=SUCCESS part silently, and responds exactly as before.
However, when running the Perl script, Net::SMTP is kind enough to issue a warning to stderr:
Net::SMTP::recipient: DSN option not supported by host at ./testmail.pl line 36.
The mail addresses I used in the sample session above are bogus, of course, but note that the spam victim is the sender of the email, because that’s where the return receipt goes. On top of that, the RCPT TO address will also get a spam message, but that’s the smaller problem: it’s yet another spam message arriving, not one that is sent away from our server.
I should also mention that Notify can be a comma-separated list of events, e.g.
RCPT TO:<bad_address@billauer.co.il> NOTIFY=SUCCESS,FAILURE,DELAY
however FAILURE doesn’t cover the case of the recipient not being known to the server, in which case the message is dropped anyhow, without any DSN message generated. So as a spam trick, one can’t send mails to random addresses and have the server emit bounce messages for the failures. That would have been too easy.
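In the Perl sketch above, the corresponding change would presumably be listing all three events in Notify:
die("Failed to set recipient\n")
  if (! ($smtp->recipient( ($to_addr), { Notify => ['SUCCESS', 'FAILURE', 'DELAY'] } ) ) );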
In the mail logs
The sample session shown above causes the following lines in mail.log. Note the “DSN: Return receipt” line, which indicates that the return receipt mechanism was fired off.
Jul 15 10:15:31 sm-mta[12697]: 06FAFTbL012697: from=<spamvictim@billauer.co.il>, size=121, class=0, nrcpts=1, msgid=<202007151015.06FAFTbL012697@mx.mymailserver.com>, proto=ESMTP, daemon=IPv4-port-587, relay=46-117-33-227.bb.netvision.net.il
[46.117.33.227]
Jul 15 10:15:31 sm-mta[12698]: 06FAFTbL012697: to=<legal_address@billauer.co.il>, ctladdr=<spamvictim@billauer.co.il> (1010/500), delay=00:00:01, xdelay=00:00:00, mailer=local, pri=30456, dsn=2.0.0, stat=Sent
Jul 15 10:15:31 sm-mta[12698]: 06FAFTbL012697: 06FAFVbL012698: DSN: Return receipt
Jul 15 10:15:31 sm-mta[12698]: 06FAFVbL012698: to=<spamvictim@billauer.co.il>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent
The receipt
While I’m at it, this is what a receipt message for the sample session above looks like:
Received: from localhost (localhost) by mx.mymailserver.com
(8.14.4/8.14.4/Debian-8+deb8u2) id 06FAFVbL012698; Wed, 15 Jul 2020
10:15:31 GMT
Date: Wed, 15 Jul 2020 10:15:31 GMT
From: Mail Delivery Subsystem <MAILER-DAEMON@billauer.co.il>
Message-ID: <202007151015.06FAFVbL012698@mx.mymailserver.com>
To: <spamvictim@billauer.co.il>
MIME-Version: 1.0
Content-Type: multipart/report; report-type=delivery-status;
boundary="06FAFVbL012698.1594808131/mx.mymailserver.com"
Subject: Return receipt
Auto-Submitted: auto-generated (return-receipt)
X-Mail-Filter: main
This is a MIME-encapsulated message
--06FAFVbL012698.1594808131/mx.mymailserver.com
The original message was received at Wed, 15 Jul 2020 10:15:30 GMT
from 46-117-33-227.bb.netvision.net.il [46.117.33.227]
----- The following addresses had successful delivery notifications -----
<legal_address@billauer.co.il> (successfully delivered to mailbox)
----- Transcript of session follows -----
<legal_address@billauer.co.il>... Successfully delivered
--06FAFVbL012698.1594808131/mx.mymailserver.com
Content-Type: message/delivery-status
Reporting-MTA: dns; mx.mymailserver.com
Received-From-MTA: DNS; 46-117-33-227.bb.netvision.net.il
Arrival-Date: Wed, 15 Jul 2020 10:15:30 GMT
Final-Recipient: RFC822; legal_address@billauer.co.il
Action: delivered (to mailbox)
Status: 2.1.5
Last-Attempt-Date: Wed, 15 Jul 2020 10:15:31 GMT
--06FAFVbL012698.1594808131/mx.mymailserver.com
Content-Type: text/rfc822-headers
Return-Path: <spamvictim@billauer.co.il>
Received: from localhost.localdomain (46-117-33-227.bb.netvision.net.il [46.117.33.227])
by mx.mymailserver.com (8.14.4/8.14.4/Debian-8+deb8u2) with ESMTP id 06FAFTbL012697
for <legal_address@billauer.co.il>; Wed, 15 Jul 2020 10:15:30 GMT
Date: Wed, 15 Jul 2020 10:15:29 GMT
Message-Id: <202007151015.06FAFTbL012697@mx.mymailserver.com>
MIME-Version: 1.0
From: spamvictim@billauer.co.il
To: legal_address@billauer.co.il
Subject: Testing email.
--06FAFVbL012698.1594808131/mx.mymailserver.com--
But note that if DSN is used by a spammer to trick our mail server, all we get is the failure notice that results from sending this message to the other server. That’s if we’re lucky enough to get anything at all: if the message is accepted, we’ll never know our server has been sending spam.
A short one: What to do if unmount is impossible with a
# umount /path/to/mount
umount: /path/to/mount: target is busy
but grepping the output of lsof for the said path yields nothing. In other words, the mount is busy, but no process can be blamed for accessing it (not even as a home directory).
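For the record, the check I’m referring to is simply this (it comes up empty, even though umount refuses):
# lsof | grep /path/to/mount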
If this happens, odds are that it’s an NFS mount, held by some remote machine. The access might have been over long ago, but the mount is still considered busy. So the solution for this case is simple: Restart the NFS daemon. On Linux Mint 19 (and probably a lot of others) it’s simply
# systemctl restart nfs-server
and after this, umount is successful (hopefully…)
Introduction
So I got myself an LG OLED65B9. It’s huge, and a really nice piece of electronics. I opted out of the bells and whistles, and connected it via HDMI to my already existing media computer, running Linux Mint 18.1. All I wanted was a plain (yet very high quality) display.
However at some point I noticed that black wasn’t displayed as black. I opened GIMP, drew a huge black rectangle, and it displayed as dark grey. At first I thought that the screen was defective (or that I was overly optimistic expecting that black would be complete darkness), but then I tried an image from a USB stick, and reassured myself that black is displayed as complete darkness. As it should. Or why did I pay extra for an OLED?
Because I skipped the “play with the new toy” phase with this display, I’m 100% sure it’s at its factory settings. It’s not something I messed up.
I should mention that I use a plain HD resolution of 1920x1080. The screen can do much better than that (see the list of resolutions below), and defaults to 3840x2160 with my computer, but that’s quite pointless: I don’t know about you, but I have nothing to show that goes higher than 1080p. And the computer’s graphics stutters at 4K UHD. So why push it?
I have a previous post on graphics modes, and one on the setup of the Brix media center computer involved.
So why is black displayed LCD style?
The truth is that I don’t know for sure. But it seems to be a problem only with standard 16:9 graphics modes. When switching to modes that are typical for computers (5:4 and 4:3 aspect ratios), the image was stretched to the entire screen, and black areas showed as pitch black. I’m not sure about this conclusion, and even less do I have an idea why this would happen or why a properly designed display would “correct” a pixel that arrives as RGB all zeros to something brighter.
Also, the black level on text consoles (Ctrl-Alt-F1 style) is horrible. But I’m not sure which resolution they use.
An idea that crossed my mind is that maybe the pixels are sent as YCbCr in some modes, or maybe the computer goes “Hey, I’m a TV now, let’s do some color correction nobody asked for” when standard HDTV aspect ratios are used. If anything, I’d go for the second possibility. But xrandr’s verbose output implies that both brightness and gamma are set to 1.0 for the relevant HDMI output, even when black isn’t black.
The graphics adapter is an Intel Celeron J3160’s on-chip “HD Graphics” processor (8086:22b1), so nothing fancy here.
Update, 24.2.21: It actually seems like there’s an explanation. The computer uses the i915 module on a Linux kernel 4.4.0-53-generic. Why is that important? Well, look at the output of xrandr below. It says “Broadcast RGB: Automatic”. That means that the color range is set automatically by the driver. And indeed, one of the options is “Limited 16:235”, which means that the darkest black is actually level 16 out of 255, as mentioned here, and passionately discussed here and here.
The automatic mode was added to the driver in kernel commit 55bc60db5988c8366751d3d04dd690698a53412c.
The interesting part seems to be this change in drivers/gpu/drm/i915/intel_hdmi.c (became drivers/gpu/drm/i915/display/intel_hdmi.c later):
+ if (intel_hdmi->color_range_auto) {
+ /* See CEA-861-E - 5.1 Default Encoding Parameters */
+ if (intel_hdmi->has_hdmi_sink &&
+ drm_mode_cea_vic(adjusted_mode) > 1)
+ intel_hdmi->color_range = SDVO_COLOR_RANGE_16_235;
+ else
+ intel_hdmi->color_range = 0;
+ }
The call to drm_mode_cea_vic() apparently means that if there’s a known Video Identification Code (VIC) for the timing parameters in use (i.e., there’s a standard mode corresponding to what is used), then the limited color range is applied. Why? Because. I don’t know if this has been fixed since.
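Incidentally, since the driver exposes this choice as the “Broadcast RGB” output property (it’s visible in the xrandr output below), it should be possible to force the full range directly instead of working around it. I went a different way (see “The fix” below), so consider this an untested sketch:
$ xrandr -d :0 --output HDMI3 --set "Broadcast RGB" "Full"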
Thanks to Nate Long for pointing out the discussions on the kernel module.
The fix
This just worked for me, and I didn’t feel like playing with it further. So I can’t promise that this is a consistent solution, but it actually seems that way.
The idea is that since the problem arises with standard 16:9 modes, maybe make up a non-standard one?
Unlike the case with my previous TV, using cvt to calculate the timing parameters turned out to be a good idea.
$ cvt 1920 1080 60
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
$ xrandr -d :0 --newmode "try" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
$ xrandr -d :0 --addmode HDMI3 try
$ xrandr -d :0 --output HDMI3 --mode try
At this point I got a proper 1920x1080 on the screen, with black pixels as dark as when the display is powered off. The output of xrandr after this was somewhat unexpected, yet functionally what I wanted:
$ xrandr -d :0 --verbose
1280x720 (0x4b) 74.250MHz +HSync +VSync +preferred
h: width 1280 start 1390 end 1430 total 1650 skew 0 clock 45.00KHz
v: height 720 start 725 end 730 total 750 clock 60.00Hz
1920x1080 (0x141) 173.000MHz -HSync +VSync *current
h: width 1920 start 2048 end 2248 total 2576 skew 0 clock 67.16KHz
v: height 1080 start 1083 end 1088 total 1120 clock 59.96Hz
1920x1080 (0x10c) 148.500MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 67.50KHz
v: height 1080 start 1084 end 1089 total 1125 clock 60.00Hz
[ ... ]
try (0x13e) 173.000MHz -HSync +VSync
h: width 1920 start 2048 end 2248 total 2576 skew 0 clock 67.16KHz
v: height 1080 start 1083 end 1088 total 1120 clock 59.96Hz
So the mode in effect didn’t turn out to be the one I generated (“try”), but a replica of its parameters, marked as 0x141 (and 0x13a on another occasion). This mode wasn’t there before.
I don’t quite understand how this happened. Maybe Cinnamon’s machinery did this. It kind of gets in the way all the time, and at times it didn’t let me set just any mode I liked with xrandr, so maybe that’s it. This whole thing with graphics modes is completely out of control.
I should mention that there is no problem with sound in this mode (or any other situation I tried). Not that there should be, but at some point I thought maybe there would be, because the mode implies a computer and not a TV-something. But no issues at all. Actually, the screen’s loudspeakers are remarkably good, with a surprisingly present bass, but that’s a different story.
As for making this special mode permanent, that turned out to be a problem in itself. This post shows how I eventually solved it.
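By the way, to save retyping cvt’s numbers when experimenting with modes, the modeline can be spliced into xrandr directly. A sketch, assuming cvt’s output format remains as shown above:
$ xrandr -d :0 --newmode "try" $(cvt 1920 1080 60 | sed -n 's/^Modeline "[^"]*" *//p')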
List of graphics modes
Just in case this interests anyone, this is the full list of supported modes, from xrandr’s verbose output:
$ xrandr -d :0 --verbose
[ ... ]
HDMI3 connected primary 3840x2160+0+0 (0x1ba) normal (normal left inverted right x axis y axis) 1600mm x 900mm
Identifier: 0x48
Timestamp: -1469585217
Subpixel: unknown
Gamma: 1.0:1.0:1.0
Brightness: 1.0
Clones:
CRTC: 0
CRTCs: 0
Transform: 1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000
filter:
EDID:
00ffffffffffff001e6da0c001010101
011d010380a05a780aee91a3544c9926
0f5054a1080031404540614071408180
d1c00101010104740030f2705a80b058
8a0040846300001e023a801871382d40
582c450040846300001e000000fd0018
781e871e000a202020202020000000fc
004c472054560a20202020202020012d
02035af1565f101f0413051403021220
212215015d5e6263643f403209570715
07505707016704033d1ec05f7e016e03
0c001000b83c20008001020304e200cf
e305c000e50e60616566eb0146d0002a
1803257d76ace3060d01662150b05100
1b304070360040846300001e00000000
0000000000000000000000000000008b
aspect ratio: Automatic
supported: Automatic, 4:3, 16:9
Broadcast RGB: Automatic
supported: Automatic, Full, Limited 16:235
audio: auto
supported: force-dvi, off, auto, on
3840x2160 (0x1ba) 297.000MHz +HSync +VSync *current +preferred
h: width 3840 start 4016 end 4104 total 4400 skew 0 clock 67.50KHz
v: height 2160 start 2168 end 2178 total 2250 clock 30.00Hz
4096x2160 (0x1bb) 297.000MHz +HSync +VSync
h: width 4096 start 5116 end 5204 total 5500 skew 0 clock 54.00KHz
v: height 2160 start 2168 end 2178 total 2250 clock 24.00Hz
4096x2160 (0x1bc) 296.703MHz +HSync +VSync
h: width 4096 start 5116 end 5204 total 5500 skew 0 clock 53.95KHz
v: height 2160 start 2168 end 2178 total 2250 clock 23.98Hz
3840x2160 (0x1bd) 297.000MHz +HSync +VSync
h: width 3840 start 4896 end 4984 total 5280 skew 0 clock 56.25KHz
v: height 2160 start 2168 end 2178 total 2250 clock 25.00Hz
3840x2160 (0x1be) 297.000MHz +HSync +VSync
h: width 3840 start 5116 end 5204 total 5500 skew 0 clock 54.00KHz
v: height 2160 start 2168 end 2178 total 2250 clock 24.00Hz
3840x2160 (0x1bf) 296.703MHz +HSync +VSync
h: width 3840 start 4016 end 4104 total 4400 skew 0 clock 67.43KHz
v: height 2160 start 2168 end 2178 total 2250 clock 29.97Hz
3840x2160 (0x1c0) 296.703MHz +HSync +VSync
h: width 3840 start 5116 end 5204 total 5500 skew 0 clock 53.95KHz
v: height 2160 start 2168 end 2178 total 2250 clock 23.98Hz
1920x1080 (0x1c1) 297.000MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 135.00KHz
v: height 1080 start 1084 end 1089 total 1125 clock 120.00Hz
1920x1080 (0x1c2) 297.000MHz +HSync +VSync
h: width 1920 start 2448 end 2492 total 2640 skew 0 clock 112.50KHz
v: height 1080 start 1084 end 1094 total 1125 clock 100.00Hz
1920x1080 (0x1c3) 296.703MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 134.87KHz
v: height 1080 start 1084 end 1089 total 1125 clock 119.88Hz
1920x1080 (0x16c) 148.500MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 67.50KHz
v: height 1080 start 1084 end 1089 total 1125 clock 60.00Hz
1920x1080 (0x1c4) 148.500MHz +HSync +VSync
h: width 1920 start 2448 end 2492 total 2640 skew 0 clock 56.25KHz
v: height 1080 start 1084 end 1089 total 1125 clock 50.00Hz
1920x1080 (0x16d) 148.352MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 67.43KHz
v: height 1080 start 1084 end 1089 total 1125 clock 59.94Hz
1920x1080i (0x10c) 74.250MHz +HSync +VSync Interlace
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 33.75KHz
v: height 1080 start 1084 end 1094 total 1125 clock 60.00Hz
1920x1080i (0x10d) 74.250MHz +HSync +VSync Interlace
h: width 1920 start 2448 end 2492 total 2640 skew 0 clock 28.12KHz
v: height 1080 start 1084 end 1094 total 1125 clock 50.00Hz
1920x1080 (0x1c5) 74.250MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 33.75KHz
v: height 1080 start 1084 end 1089 total 1125 clock 30.00Hz
1920x1080 (0x1c6) 74.250MHz +HSync +VSync
h: width 1920 start 2448 end 2492 total 2640 skew 0 clock 28.12KHz
v: height 1080 start 1084 end 1089 total 1125 clock 25.00Hz
1920x1080 (0x1c7) 74.250MHz +HSync +VSync
h: width 1920 start 2558 end 2602 total 2750 skew 0 clock 27.00KHz
v: height 1080 start 1084 end 1089 total 1125 clock 24.00Hz
1920x1080i (0x10e) 74.176MHz +HSync +VSync Interlace
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 33.72KHz
v: height 1080 start 1084 end 1094 total 1125 clock 59.94Hz
1920x1080 (0x1c8) 74.176MHz +HSync +VSync
h: width 1920 start 2008 end 2052 total 2200 skew 0 clock 33.72KHz
v: height 1080 start 1084 end 1089 total 1125 clock 29.97Hz
1920x1080 (0x1c9) 74.176MHz +HSync +VSync
h: width 1920 start 2558 end 2602 total 2750 skew 0 clock 26.97KHz
v: height 1080 start 1084 end 1089 total 1125 clock 23.98Hz
1280x1024 (0x1b5) 108.000MHz +HSync +VSync
h: width 1280 start 1328 end 1440 total 1688 skew 0 clock 63.98KHz
v: height 1024 start 1025 end 1028 total 1066 clock 60.02Hz
1360x768 (0x4b) 85.500MHz +HSync +VSync
h: width 1360 start 1424 end 1536 total 1792 skew 0 clock 47.71KHz
v: height 768 start 771 end 777 total 795 clock 60.02Hz
1152x864 (0x1ca) 81.579MHz -HSync +VSync
h: width 1152 start 1216 end 1336 total 1520 skew 0 clock 53.67KHz
v: height 864 start 865 end 868 total 895 clock 59.97Hz
1280x720 (0x110) 74.250MHz +HSync +VSync
h: width 1280 start 1390 end 1430 total 1650 skew 0 clock 45.00KHz
v: height 720 start 725 end 730 total 750 clock 60.00Hz
1280x720 (0x111) 74.250MHz +HSync +VSync
h: width 1280 start 1720 end 1760 total 1980 skew 0 clock 37.50KHz
v: height 720 start 725 end 730 total 750 clock 50.00Hz
1280x720 (0x112) 74.176MHz +HSync +VSync
h: width 1280 start 1390 end 1430 total 1650 skew 0 clock 44.96KHz
v: height 720 start 725 end 730 total 750 clock 59.94Hz
1024x768 (0x113) 65.000MHz -HSync -VSync
h: width 1024 start 1048 end 1184 total 1344 skew 0 clock 48.36KHz
v: height 768 start 771 end 777 total 806 clock 60.00Hz
800x600 (0x115) 40.000MHz +HSync +VSync
h: width 800 start 840 end 968 total 1056 skew 0 clock 37.88KHz
v: height 600 start 601 end 605 total 628 clock 60.32Hz
720x576 (0x116) 27.000MHz -HSync -VSync
h: width 720 start 732 end 796 total 864 skew 0 clock 31.25KHz
v: height 576 start 581 end 586 total 625 clock 50.00Hz
720x576i (0x117) 13.500MHz -HSync -VSync Interlace
h: width 720 start 732 end 795 total 864 skew 0 clock 15.62KHz
v: height 576 start 580 end 586 total 625 clock 50.00Hz
720x480 (0x118) 27.027MHz -HSync -VSync
h: width 720 start 736 end 798 total 858 skew 0 clock 31.50KHz
v: height 480 start 489 end 495 total 525 clock 60.00Hz
720x480 (0x119) 27.000MHz -HSync -VSync
h: width 720 start 736 end 798 total 858 skew 0 clock 31.47KHz
v: height 480 start 489 end 495 total 525 clock 59.94Hz
640x480 (0x11c) 25.200MHz -HSync -VSync
h: width 640 start 656 end 752 total 800 skew 0 clock 31.50KHz
v: height 480 start 490 end 492 total 525 clock 60.00Hz
640x480 (0x11d) 25.175MHz -HSync -VSync
h: width 640 start 656 end 752 total 800 skew 0 clock 31.47KHz
v: height 480 start 490 end 492 total 525 clock 59.94Hz
720x400 (0x1cb) 28.320MHz -HSync +VSync
h: width 720 start 738 end 846 total 900 skew 0 clock 31.47KHz
v: height 400 start 412 end 414 total 449 clock 70.08Hz
So it even supports fallback mode with a 25.175 MHz clock if one really insists.