Making a video clip visualizing sound with libvisual on Ubuntu 14.04

Intro

The purpose of this mini-project was to create a video clip with visualized audio, instead of just a dull still frame. Libvisual is the graphics engine commonly used by media players on Linux, but I wanted the result in a file, not on the screen.

Libvisual’s sources come with lv-tool, a command-line utility apparently intended for testing the library and its plugins. It can send raw video to standard output, but as of March 2015 there is no plugin for getting the sound from standard input. So I hacked one together, compiled it, and used it with lv-tool (more about this below, of course).

Note to self: To resume, look for libvisual.git in L/linux/.

Installing required packages

The following installations were required on my machine (this may vary, depending on what you already have):

# apt-get install cmake g++
# apt-get install libpng-dev zlib1g-dev
# apt-get install autoconf
# apt-get install liborc-0.4-dev

Downloading & compiling libvisual

$ git clone https://github.com/Libvisual/libvisual.git libvisual
$ cd libvisual
$ git checkout -b myown 4149d9bc1b8277567876ddba1c5415f4d308339d
$ cd libvisual
$ cmake .
$ make
$ sudo make install
$ cd ../libvisual-plugins
$ cmake .
$ make
$ sudo make install

There is no particular reason why I checked out that specific commit ID, except for a rather random attempt to solve a dependency issue (which turned out to be irrelevant), after which I forgot to switch back.

A trial run (not from stdin yet)

For help:

$ lv-tool -h

Listing plugins:

$ lv-tool -p

A test run on a song: In one console, run

$ mplayer -af export=~/.mplayer/mplayer-af_export:512 song.mp3

This plays the song on the computer, and also allows libvisual access to the raw sound samples.

And on another console

$ lv-tool -i mplayer -a blursk -F 300 -f 5 -D 640x480 -d stdout > song.bin

Then play the clip with

$ ffplay -f rawvideo -video_size 640x480 -pix_fmt gray -framerate 5 song.bin

The frame rate was chosen as 5, and it can be increased, of course.

(Use “ffplay -pix_fmts” to get a list of supported pixel formats, such as rgb8)

This isn’t all that good, because lv-tool generates video frames based on the sound currently playing. Even though it’s possible to sync the video with the audio later on, there is no guarantee that this sync will hold: if the computer gets busy somewhere in the middle of rendering, lv-tool may stall for a short moment, and then continue with whatever sound is playing when it comes back. mplayer won’t wait, and lv-tool makes no effort to compensate for the lost frames; on the contrary, it skips frames after stalling.

The stdin plugin

The idea behind the stdin plugin is so simple that I’m quite sure libvisual’s developers have actually written one, but didn’t add it to the distribution to avoid confusion: All it does is read samples from stdin, and supply part of them as sound samples for rendering. As the “upload” method is called for every frame, it’s enough to consume the amount of sound samples corresponding to the frame rate that is chosen when the raw video stream is converted into a clip.

The plugin can be added to libvisual’s project tree with this git patch. It’s made against the commit ID mentioned above, but will probably apply fine to later revisions. It doesn’t conform to libvisual’s coding style, I suppose; it’s a hack, after all.

Note that the patch is hardcoded for signed 16-bit stereo at 44100 Hz, and assumes 30 fps. This is easily modified via the #define statements at the top of the source. The audio samples are supplied to libvisual’s machinery in buffers of 4096 bytes each, even though 44100 x 2 x 2 / 30 = 5880 bytes are played per frame at 30 fps; it’s common not to supply all the audio samples that are played. The mplayer plugin supplies only 2048 bytes, for example. This has only minor significance for the graphics.
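The per-frame byte count follows directly from the audio format. A quick sanity check with plain shell arithmetic (just an illustration, not part of the patch):

```shell
# Bytes of raw s16le stereo 44100 Hz audio per video frame:
# sample_rate * channels * bytes_per_sample / fps
rate=44100
channels=2
bytes_per_sample=2
for fps in 25 30 60; do
    echo "$fps fps: $(( rate * channels * bytes_per_sample / fps )) bytes/frame"
done
```

At 30 fps this yields the 5880 bytes per frame mentioned above.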

After patching, re-run cmake, make and make install. Instead of reinstalling everything, it’s possible to just copy the plugin manually into the plugin directory:

# cp libinput_stdin.so /usr/local/lib/x86_64-linux-gnu/libvisual-0.5/input/

The plugin should appear in the output of “lv-tool -p” after this procedure. And hopefully work, too. ;)

Producing video

The blursk actor plugin is assumed here, but any actor can be used.

First, convert song to WAV:

$ ffmpeg -i song.mp3 song.wav

Note that this is somewhat dirty: I should have requested a raw audio stream with the desired attributes as output, and ffmpeg is capable of doing it. But the common WAV file is more or less that, except for the header, which is skipped quickly enough.

Just make sure the output is stereo, signed 16 bit, 44100 Hz, or set ffmpeg’s flags accordingly (e.g. “-ac 2 -ar 44100 -c:a pcm_s16le”).

Create graphics (monochrome):

$ lv-tool -i stdin -a blursk -D 640x480 -d stdout > with-stdin.bin < song.wav

Mixing video with audio and creating a DIVX clip:

$ ffmpeg -f rawvideo -s:v 640x480 -pix_fmt gray -r 30 -i with-stdin.bin -ab 128k -b 5000k -i song.wav -vcodec mpeg4 -vtag DIVX try.avi

Same, but with colors (note the -c 32 and -pix_fmt):

$ time lv-tool -i stdin -c 32 -a blursk -D 640x480 -d stdout > color.bin < song.wav
$ ffmpeg -f rawvideo -s:v 640x480 -pix_fmt rgb32 -r 30 -i color.bin -ab 128k -b 5000k -i song.wav -vcodec mpeg4 -vtag DIVX color.avi

It’s also possible to use “24” instead of “32” above (with rgb24 as the pixel format), but some actors will produce a black screen with this setting. The same actors also fail with 8 bits (grayscale).

And to avoid large intermediate .bin files, pipe from lv-tool to ffmpeg directly:

$ lv-tool -i stdin -c 32 -a blursk -D 640x480 -d stdout < song.wav | ffmpeg -f rawvideo -s:v 640x480 -pix_fmt rgb32 -r 30 -i - -ab 128k -b 5000k -i song.wav -vcodec mpeg4 -vtag DIVX clip.avi

This is handy in particular for high-resolution frames (HD and such).
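To get a feel for how large those intermediate files get, the raw video data rate is simply width × height × bytes-per-pixel × fps. A back-of-the-envelope calculation in shell (the resolutions are just examples):

```shell
# Raw video data rate: width * height * bytes_per_pixel * fps
fps=30
for res in "640 480" "1280 720"; do
    set -- $res
    echo "${1}x${2} gray:  $(( $1 * $2 * 1 * fps )) bytes/s"
    echo "${1}x${2} rgb32: $(( $1 * $2 * 4 * fps )) bytes/s"
done
```

At 1280x720 and 30 fps, an rgb32 stream runs at over 110 million bytes per second, which is why piping pays off for HD.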

The try-all script

To scan through all actors in libvisual-0.5, run the following script (produces 720p video, or set “resolution”):

#!/bin/bash

song=$1
resolution=1280x720

for actor in blursk bumpscope corona gforce infinite jakdaw jess \
             lv_analyzer lv_scope oinksie plazma ; do
  lv-tool -i stdin -c 24 -a $actor -D $resolution -d stdout < $song | \
    ffmpeg -f rawvideo -s:v $resolution -pix_fmt rgb24 -r 30 -i - \
      -ab 128k -b 5000k -i $song -vcodec mpeg4 -vtag DIVX ${actor}_color.avi

  lv-tool -i stdin -c 8 -a $actor -D $resolution -d stdout < $song | \
    ffmpeg -f rawvideo -s:v $resolution -pix_fmt gray -r 30 -i - \
      -ab 128k -b 5000k -i $song -vcodec mpeg4 -vtag DIVX ${actor}_gray.avi

done

It attempts both color and grayscale.

Fixing the mouse sensitivity on Gnome 2

This relates to my Fedora 12 machine with a Logitech M705 mouse. The pointer just had a generally bad feel, I would say.

This is actually written on this post already, with some more details on this one, but I prefer having my own routine and final values written down.

So first get a list of input devices:

$ xinput list
⎡ Virtual core pointer                        id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                  id=4    [slave  pointer  (2)]
⎜   ↳ Microsoft Microsoft 5-Button Mouse with IntelliEye(TM)    id=6    [slave  pointer  (2)]
⎜   ↳ HID 04f3:0103                               id=7    [slave  pointer  (2)]
⎜   ↳ Logitech USB Receiver                       id=9    [slave  pointer  (2)]
⎣ Virtual core keyboard                       id=3    [master keyboard (2)]
 ↳ Virtual core XTEST keyboard                 id=5    [slave  keyboard (3)]
 ↳ Power Button                                id=12    [slave  keyboard (3)]
 ↳ Power Button                                id=13    [slave  keyboard (3)]
 ↳ USB  AUDIO                                  id=14    [slave  keyboard (3)]
 ↳ HID 04f3:0103                               id=8    [slave  keyboard (3)]
 ↳ Logitech USB Receiver                       id=10    [slave  keyboard (3)]

Then get the properties of the USB mouse. Since the string “Logitech USB Receiver” refers to a keyboard input as well as a pointer input, it has to be disambiguated with a pointer: prefix before the identifier. Or just use the numeric ID (not safe in a script, though).

So

$ xinput list-props 9

and

$ xinput list-props pointer:"Logitech USB Receiver"

give the same result, given the list of input devices above.

The output:

$ xinput list-props pointer:"Logitech USB Receiver"
Device 'Logitech USB Receiver':
 Device Enabled (131):    1
 Device Accel Profile (264):    0
 Device Accel Constant Deceleration (265):    1.000000
 Device Accel Adaptive Deceleration (267):    1.000000
 Device Accel Velocity Scaling (268):    10.000000
 Evdev Reopen Attempts (269):    10
 Evdev Axis Inversion (270):    0, 0
 Evdev Axes Swap (272):    0
 Axis Labels (273):    "Rel X" (139), "Rel Y" (140)
 Button Labels (274):    "Button Left" (132), "Button Middle" (133), "Button Right" (134), "Button Wheel Up" (135), "Button Wheel Down" (136), "Button Horiz Wheel Left" (137), "Button Horiz Wheel Right" (138), "Button Side" (283), "Button Extra" (284), "Button Forward" (1205), "Button Back" (1206), "Button Task" (1207), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249), "Button Unknown" (249)
 Evdev Middle Button Emulation (275):    2
 Evdev Middle Button Timeout (276):    50
 Evdev Wheel Emulation (277):    0
 Evdev Wheel Emulation Axes (278):    0, 0, 4, 5
 Evdev Wheel Emulation Inertia (279):    10
 Evdev Wheel Emulation Timeout (280):    200
 Evdev Wheel Emulation Button (281):    4
 Evdev Drag Lock Buttons (282):    0

It turns out that the required change on my machine was

$ xinput set-prop pointer:"Logitech USB Receiver" "Device Accel Adaptive Deceleration" 3

This is not what I expected to do: it slows down the pointer’s movement when the mouse moves slowly. Surprisingly enough, this makes pointing more intuitive, because hitting an exact spot requires more physical motion, so the mouse doesn’t get stuck millimeters away from the target.

As the said post mentions, these settings won’t survive a session restart. But that’s a rare event on my computer. Anyhow, the method suggested for making them persistent is to add a small script as a startup application. To do this, prepare a small script doing the required setup, and add it as a startup script with

$ gnome-session-properties &

Or, maybe the correct way is to add/edit ~/.xinitrc or ~/.xprofile? Will figure that out when I logout next time (happens once in a few months…).
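Either way, the startup script itself only needs to re-apply the property. A minimal sketch (the file name and location are arbitrary; the device name and value are the ones found above):

```shell
#!/bin/bash
# Hypothetical ~/bin/fix-mouse.sh: re-apply the xinput setting at session startup.
xinput set-prop pointer:"Logitech USB Receiver" \
    "Device Accel Adaptive Deceleration" 3
```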

 

Vivado 2014.1 eliminating instantiations of IP (black boxes)

I discovered this problem in a project that instantiated a 512-bit wide FIFO many (>16) times in different modules. For some unknown reason (it’s called a bug, I suppose), Vivado treated the instantiations as if they weren’t there, and optimized all surrounding logic as if the black box’s output ports were all zero. For a lower number of instantiations, Vivado handled the instantiation as expected.

I should point out that Vivado did issue synthesis warnings related to the instantiations, as if they were processed normally (e.g. mismatches between port widths and the wires connected to them), and yet there was no trace of these instantiations in the post-synthesis netlist view.

In Vivado, cores from the IP Catalog are treated as black box modules: The IP is typically first compiled into a DCP, and then a black-box module (an empty Verilog module, for example) represents it during the synthesis stage. The DCP is then fused into the project during implementation (like ngdbuild in ISE).

One clue that this has happened is a critical warning from the implementation stage, saying something like

CRITICAL WARNING: [Designutils 20-1280] Could not find module 'fifo_wide'. The XDC file /path/to/fifo_wide/fifo_wide/fifo_wide.xdc will not be read for any cell of this module.

Another way to tell this has taken place is to look in the synthesis’ runme.log file (as in vivado-project/project.runs/synth_1/runme.log). The black boxes are listed in the “Report BlackBoxes” section, and each of their instantiations in “Report Cell Usage”. So if the instantiated module doesn’t appear at all in the former, or not enough times in the latter, it’s a clear indication that something went wrong.

Workaround

After trying out a lot of things, the workaround was to define two IP cores, fifo_wide_rd and fifo_wide_wr, which are identical. The root of the problem seems to have been that the same FIFO was used in two different modules (one on the write side of a DDR memory, and one on the read side). Due to the different usage contexts and the huge amount of logic involved, it seems like the tools messed up trying to optimize things.

So using one core for the write module and one for the read module got the tools back on track. There is of course no sensible reason to use different cores in different modules, other than working around a bug in Vivado.

I should mention that another FIFO is instantiated 20 times in the design, also from two different modules, and nothing bad happened there. However, its width is only 32 bits.

Failed attempt

This solved the problem at the synthesis stage, but not all the way through. I left it here just for reference.

The simple solution is to tell Vivado not to attempt optimizing anything related to this module. For example, if the instance name is fifo_wide_inst, the following line in any of the XDC constraints files will do:

set_property DONT_TOUCH true [get_cells -hier -filter {name=~*/fifo_wide_inst}]

This should be completely harmless, as there’s nothing to optimize anyhow — the logic is already optimized inside the DCP. It may be a good idea to do this to all instantiations, just to be sure.

What actually happened with this constraint was that many groups of twenty BUF elements (not IBUF or anything like that, just BUF), named for example ‘bbstub_dout[194]__xx’ (xx going from 1 to 20), were created in the netlist. All had unconnected inputs, and the outputs of all twenty buffers were connected to the same net. So obviously, nothing good came out of this. The fifo_wide_inst block was nonexistent in the netlist, even though twenty instances of it appeared in the synthesis’ runme.log file.

So there were twenty groups of bbstubs for each of the 512 wires of the FIFO, and this applied to each of the twenty modules in which one of these FIFOs was instantiated. No wonder the implementation took a lot of time.

Vivado 2014.1 / Linux Ubuntu 14.04 license activation notes

Introduction

After installing Vivado 2014.1 on my laptop running Ubuntu 14.04 (64 bits), I went for license activation. All I wanted was a plain node-locked license. Not a server, and not a floating one. Baseline.

Xilinx abandoned the good old certificate licensing in favor of activation licensing. That is causing some headaches lately…

Going through the process, I had several problems. The most commonly reported one is that when entering the web page on which the license should be generated (see image below), the activation region is greyed out. A “helpful” message next to the greyed-out area gives suggestions on why it is disabled: Either a license has already been generated based upon that particular request ID, or the page was entered directly, and not through the Vivado License Manager.

But there’s another important possible reason: The request may be invalid, in particular because the computer’s identification is missing.

If this is the case, there is no special error message for this. Just that “important information” note. See “What the problem was” below.

[Image: Xilinx’ licensing page, with the activation area greyed out]

As a side note: Ubuntu 14.04 is not on the list of supported OS’s, but that’s what I happen to have. Besides, it turned out that the problem wasn’t with the OS anyway.

The activation process (in brief)

It seems like the whole idea of this activation process is that the licensing file returned from Xilinx can’t be used more than once. So instead of making the licensing file valid for a computer ID, it’s made valid with respect to a request ID. Hence the licensing tools on the user’s computer first need to prepare for receiving a licensing file by creating some random data, calling it a request ID. That data is conveyed to the licensing web server (Xilinx’ server, that is) along with information about the machine.

The licensing server creates a licensing file corresponding to the request ID, enabling the licensed features the user requested on the site. The user feeds this licensing file into the licensing tools (locally, on the user’s computer), which match the request ID in their own records with the one in the licensing file. If there is a match, the tools make a note that the relevant features are activated. They also delete the information about that request ID from their records.

The database containing the requests and what features are enabled is kept in the “trusted area”. Which is a fine name for some obscured database.

In practice, the process goes as follows: When clicking “Connect Now”, the Xilinx licensing client on your computer collects identifying information about your computer, creates some mumbo-jumbo hex hashes to represent that information, and creates a request ID. It then stores this information in the computer’s own “trusted area” (which must be created manually beforehand on a Linux machine) so it recognizes this request when the response returns.

It then opens a web browser (it looks like it just tries running Google Chrome first and then Firefox) with a URL that contains those mumbo-jumbo hex hashes. That eventually leads to that famous licensing page. The user checks some licensing features, and a licensing file is generated (and those features are counted as “taken” on the site).

The thing is that in order to create an activation license, the web server needs those mumbo-jumbo hashes in the URL, so it knows which request ID it’s working against. Also, a request ID that has already been used to create a license can’t be reused, because the licensing tools on the user’s side may have deleted the information about that request ID after accepting the previous licensing file.

What the problem was

The reason turned out to be that my laptop lacks a wired Ethernet NIC, and has only a wireless LAN interface. The FLEXnet license manager obviously didn’t consider wlan0 an eligible candidate for supplying the identifying MAC address (even though it’s an Ethernet card for all practical purposes), so the request generated for the computer was rejected.

This can be seen in the XML file that is generated by the command-line tools (see below) in the absence of any identifying method:

<UniqueMachineNumbers>
<UniqueMachineNumber><Type>1</Type><Value></Value></UniqueMachineNumber>
<UniqueMachineNumber><Type>2</Type><Value></Value></UniqueMachineNumber>
<UniqueMachineNumber><Type>4</Type><Value></Value></UniqueMachineNumber>
</UniqueMachineNumbers>

Compare this with after adding a (fake) NIC, as shown below:

<UniqueMachineNumbers>
<UniqueMachineNumber><Type>1</Type><Value></Value></UniqueMachineNumber>
<UniqueMachineNumber><Type>2</Type><Value>51692BAD76FCCBBFAA0D635F0CA3674E0F7FADBC</Value></UniqueMachineNumber>
<UniqueMachineNumber><Type>4</Type><Value></Value></UniqueMachineNumber>
</UniqueMachineNumbers>

But these XML files aren’t really used. What counts is the URL that is used to enter Xilinx site.

Without any identifying means, it looks like this (note the empty umn1=, umn2= and umn4= fields):

<META HTTP-EQUIV="Refresh" CONTENT="0; URL=http://license.xilinx.com/getLicense?group=esd_oms&os=lin64&version=2014&licensetype=4&ea=&ds=&di=&hn=&umn1=&umn2=&umn4=&req_hash=297B4710327A0F933FF3382961787271D94FE8CD&uuid=961710372713387A02297B4F3F78F93D924FE8CD&isserver=0&sqn=1&trustedid=1&machine_id=E83C0C895A751459C7449FF5ABFC083849233D7A&revision=DefaultOne&revisiontype=SRV&status=OK&isvirtual=0">

And a proper URL like this:

<META HTTP-EQUIV="Refresh" CONTENT="0; URL=http://license.xilinx.com/getLicense?group=esd_oms&os=lin64&version=2014&licensetype=4&ea=&ds=&di=&hn=&umn1=&umn2=51692BAD76FCCBBFAA0D635F0CA3674E0F7FADBC&umn4=&req_hash=8BD92BFBA481BFD3CA64EF6DB30133A24CA961D5&uuid=8BDCA113ABFD64EF6D392BFBA483A24CB30961D5&isserver=0&sqn=1&trustedid=1&machine_id=483F4E15B8491F0482A56C0E253B8F9D78DCD114&revision=DefaultOne&revisiontype=SRV&status=OK&isvirtual=0">

So quite evidently, the UniqueMachineNumber elements in the XML file appear as the umn1, umn2 and umn4 CGI variables in the URL. They’re all empty strings in the URL that caused the greyed-out activation region.

So fake a NIC

Since the laptop really doesn’t have a wired Ethernet card, let’s fake one and assign it a MAC address:

# /sbin/ip tuntap add dev eth0 mode tap
# /sbin/ifconfig eth0 up
# /sbin/ip link set dev eth0 address 11:22:33:44:55:66

(pick any random MAC address, of course)
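If you’d rather not invent the MAC address yourself, here is one way (my own improvisation, not part of the original procedure) to generate a random one with the locally-administered bit set, so it can’t collide with a real vendor-assigned address:

```shell
# First octet 0x02: locally administered, unicast. The rest is random.
mac=$(printf '02:%02x:%02x:%02x:%02x:%02x' \
    $(( RANDOM % 256 )) $(( RANDOM % 256 )) $(( RANDOM % 256 )) \
    $(( RANDOM % 256 )) $(( RANDOM % 256 )))
echo "$mac"
```

The result can be fed directly to the “ip link set … address” command above.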

The quick and dirty way to get this running on every bootup was to add it to /etc/rc.local on my machine. The more graceful way would be to create an upstart script executing on network activation. But I’ve had enough already…

By the way, I picked eth1 on my own computer, because eth0 is used by my Ethernet-over-USB device. Works the same.

If “Connect Now” does nothing

Even though Vivado started off apparently OK, the Vivado License Manager refused to open a browser window for obtaining a license on Ubuntu 14.04: I clicked “Connect Now”, but nothing happened. Installing some extra packages fixed this (it’s not clear whether all of them are necessary):

# apt-get install libgnomevfs2-0 libgnome2-0
# apt-get install lib32z1 lib32ncurses5 lib32bz2-1.0

As usual, strace was used to find out that this was the problem.

Dec 2018 update: Running Vivado 2015.2 on a Mint 19 machine, I got a new error message:

/usr/lib/firefox/firefox: /path/to/Vivado/2015.2/lib/lnx64.o/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /usr/lib/firefox/firefox)
/usr/lib/firefox/firefox: /path/to/Vivado/2015.2/lib/lnx64.o/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /usr/lib/firefox/firefox)

Apparently, opening Firefox from within Vivado caused it to use Vivado’s C++ runtime library, which was too old for it. Simple fix:

$ cd /path/to/Vivado/2015.2/lib/lnx64.o/
$ mv libstdc++.so.6 old-libstdc++.so.6
$ ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6

Installing FLEX license manager

This is partly documented in Xilinx’ installation guide. It has to be done once before attempting to acquire a license.

First and foremost, clean up any previous installation, in case you’ve been struggling with this for a while already. The license manager keeps its files in the directories listed below. Just delete them (or move them to another directory) to get a fresh start:

  • /tmp/FLEXnet (empty files with UUID-like file names)
  • /usr/local/share/macrovision
  • /usr/local/share/FNP
  • /usr/local/share/applications/.com.flexnetlicensing
  • ~/.Xilinx/*.lic (in particular ~/.Xilinx/trial.lic). Not sure if this is related.

Having done this, become root (or use sudo) and run install_fnp.sh. This is what it looked like when I did this, based on what was installed along with Vivado 2014.1:

# software/xilinx/Vivado/2014.1/ids_lite/ISE/bin/lin64/install_fnp.sh ./software/xilinx/Vivado/2014.1/bin/unwrapped/lnx64.o/FNPLicensingService
Installing anchor service from ./software/xilinx/Vivado/2014.1/bin/unwrapped/lnx64.o/FNPLicensingService to /usr/local/share/FNP/service64/11.11.0

Checking system for trusted storage area...
Configuring for Linux, Trusted Storage path /usr/local/share/macrovision/storage...
Creating /usr/local/share/macrovision/storage...
Setting permissions on /usr/local/share/macrovision/storage...
Permissions set...

Checking system for Replicated Anchor area...
Configuring Replicated Anchor area...
Creating Replicated Anchor area...
Setting permissions on Replicated Anchor area...
Replicated Anchor area permissions set...
Configuring Temporary area...
Temporary area already exists...
Setting permissions on Temporary area...
Temporary area permissions set...
Configuration completed successfully.

Working with the latest licensing tools

As the latest version of Vivado was 2014.4 at the time, I downloaded Vivado 2014.4’s license manager tools. The rationale was that maybe the interaction with the site had changed. With hindsight, it would probably have been OK to use 2014.1’s licensing tools, but this is how I eventually got it working.

I extracted the zipfile into ~/software/xilinx/licensing_tools/linux_flexlm_v11.11.0_201410/.

Then I went to the lnx64.o directory, ran install_fnp.sh again as root, and attempted to verify that there were no pending requests:

$ ./xlicclientmgr -l
ERROR: flxActCommonInit result 2   .
Exit(2) FLEXnet initialisation error.

The reason for this error was not finding the libXLicClientMgrFNP.so library, which is in the same directory (strace saved the day again).

The quick and dirty solution is to add the current directory to the library search path (this works only if it’s done in the directory the library is in):

$ export LD_LIBRARY_PATH=$(pwd)

And then prepare a request:

$ ./xlicclientmgr -cr ~/Desktop/newrequest.xml
Request written to /home/eli/Desktop/newrequest.xml
Request (html) written to /home/eli/Desktop/newrequest.html

The tools indeed remember that they have a pending request:

$ ./xlicclientmgr -l
 SeqNo Status     Date       Time  Reference
     1 Pending    2015-01-19 18:45 ""

Listed 1 of 1 composite requests.

Then double-clicked newrequest.html to get a license file.

With the XML file that was emailed back:

$ ./xlicclientmgr -p ~/Desktop/Xilinx_License.xml
Response processed successfully. Actions were:
    Create         fulfillment "215469875"

    FLEXnet response dictionary:
                   COMMENT=This activation license file is generated on Tue Jan 20 16:26:40 UTC 2015
$ ./xlicclientmgr -l

No stored composite requests.

(but there was one listed before using the response).

MuseScore notes on Fedora Core 12

Random notes playing with MuseScore 0.9.6 (pun not intended):

Installation (after grabbing the RPM file from the web):

# yum install --nogpgcheck MuseScore-0.9.6-1.fc12.x86_64.rpm

This installs the package without a signature check. Somewhat dangerous, as the RPM could in theory contain malicious code (as if a signature would help in this case).

The command line for kicking it off is

$ mscore &

Crashes

MuseScore may enter an infinite memory-hogging loop, ending up with a system-wide OOM condition and a lot of disk activity. To keep this mishap’s impact small, allow it no more than 2 GB of virtual memory, for example. It will never need anywhere near that much, and once it gets into this “all memory is mine” loop, it gets a kick in the bottom, and that’s it. So before calling mscore, go

$ ulimit -v 2048000

and possibly check it with

$ ulimit -a

Note that this limits any program running from the same shell.
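If that’s a concern, the limit can be confined to a subshell, so only the program launched from it is capped. A small demonstration of the principle:

```shell
# ulimit applies to the current shell and its descendants, so setting it
# inside a subshell leaves the parent shell's limit untouched.
( ulimit -v 2048000; ulimit -v )   # the subshell reports 2048000
ulimit -v                          # the parent's limit is unchanged
```

So launching the program with “( ulimit -v 2048000; mscore ) &” keeps the rest of the session unaffected.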

Editing notes (to self, that is)

  • Selection: In non-note-input mode, press Shift and mark an area. It’s also possible to mark a note, and then Shift-click another note to select everything in between (from the first note to the last, for example).
  • Beaming: That’s the name for connecting eighth and sixteenth notes with those lines. Look for “beam properties” in the palette to get separate notes, as commonly written in music for singing.
  • In the Display menu, select the Play, Mixer and Synthesizer panels, to control sound and playback tempo. Note that the mixer panel remains in place when closing and opening files, but it becomes dysfunctional at best after that. Just reopen the panel after reloading or such.

Hearing something

To get some audio playing, given errors like this on startup

Alsa_driver: the interface doesn't support mmap-based access.
init ALSA audio driver failed
init ALSA driver failed
init audio driver failed
sequencer init failed

go to Edit > Preferences, pick I/O, choose ALSA Audio only, and set the Device from “default” to “hw:0”.

But ehm, there’s a problem: MuseScore requests exclusive access to the sound device, so if anything else happens to be producing sound when MuseScore starts, it fails to initialize its sound interface (and therefore won’t play anything during that session). And if it does manage to grab the sound card, all other programs attempting to play sound get stuck. This is true even when using a TCP socket to connect to the PulseAudio server.

Portaudio doesn’t make things better. To begin with, it’s a bit confusing, as the API and device entries are empty. But just select it and click “OK”, and they become populated after a restart of the program. Not graceful, but it works. Anyhow, picking the ALSA API and the hw:0,0 device (which is my sound card) gives the same result as with ALSA directly, except that I can’t control the volume with the PulseAudio controls. But the card is still grabbed exclusively, messing up other programs.

Portaudio with OSS didn’t work either, despite running mscore with padsp. No devices appeared in the list.

Loading the OSS compatible driver (modprobe snd-pcm-oss) created a /dev/dsp file indeed, but again, the sound card was exclusively taken.

My ugly solution was to find a couple of USB speakers and plug them in. And use hw:2,0 as the ALSA target in Musescore.

The elegant solution would be to create a bogus hardware card in Pulseaudio, that routes all sound to hw:0,0. I’m sure it’s possible. I’m also sure that I’ve wasted enough time on this nonsense.

Reprogramming a Series-7 MMCM for fractional division ratios

Introduction

Xilinx’ Series-7 FPGAs (Virtex-7, Kintex-7, Artix-7 and Zynq-7000) offer a rather flexible frequency synthesizer, the MMCM, allowing steps of 0.125 for setting the VCO’s multiplier and one of its dividers. The MMCM can be reprogrammed through its DRP interface, so it can be used as a source of a variable clock frequency.

These are a few notes taken while implementing a reprogrammable frequency source using the MMCME2_ADV primitive.

Resources

The main resource on this matter is Xilinx’ application note 888 (XAPP888) as well as the reference design that can be downloaded from Xilinx’ web site.

As of XAPP888 v1.3 (October 2014), there are a few typos:

  • Table 6: PHASE_MUX_F_CLKFB is on bits [13:11] and not [15:13]
  • Table 6: FRAC_WF_F_CLKFB is on bit 10 and not 12.
  • Table 7: FRAC_EN is related to CLKFBOUT, and not CLKOUT0

The reference design is written in synthesizable Verilog, but the parts that calculate the assignments to the DRP registers are written as Verilog functions, so they can’t be used as-is for an arbitrary-frequency clock generator. To make things even trickier, the coding style employed in this reference design looks like a quiz in deciphering obfuscated code (or just a Verilog parody).

As a result, it’s somewhat difficult to obtain functional logic (or possibly a computer program) for setting the registers correctly for any allowed combination of parameters. The notes below may assist in getting things straight.

A sample set of DRP registers

For reference, an MMCM was implemented on a Kintex-7 device, after which the entire DRP space was read back.

The instantiation of this MMCM was

MMCME2_ADV
 #(.BANDWIDTH          ("OPTIMIZED"),
 .CLKOUT4_CASCADE      ("FALSE"),
 .COMPENSATION         ("ZHOLD"),
 .STARTUP_WAIT         ("FALSE"),
 .DIVCLK_DIVIDE        (1),
 .CLKFBOUT_MULT_F      (5.125),
 .CLKFBOUT_PHASE       (0.000),
 .CLKFBOUT_USE_FINE_PS ("FALSE"),
 .CLKOUT0_DIVIDE_F     (40.250),
 .CLKOUT0_PHASE        (0.000),
 .CLKOUT0_DUTY_CYCLE   (0.500),
 .CLKOUT0_USE_FINE_PS  ("FALSE"),
 .CLKIN1_PERIOD        (5.0),
 .REF_JITTER1          (0.010)
 ) mmcm_ins (
   // Port connections omitted; "mmcm_ins" is a placeholder instance name
 );

The register map:

 00:  a600 0082 0003 0000 0127 9814 0041 0c40
 08:  14d3 2c00 0041 0040 0041 0040 0041 0040
 10:  0041 0040 0041 2440 1081 1880 1041 1041
 18:  03e8 3801 bbe9 0000 0000 0210 0000 01e9
 20:  0000 0000 0000 0000 0000 0000 0000 0000
 28:  9900 0000 0000 0000 0000 0000 0000 0000
 30:  0000 0000 0000 0000 0000 0000 0000 0000
 38:  0000 0000 0000 0000 0000 0000 0000 0000
 40:  0000 0000 8080 0000 0000 0800 0001 0000
 48:  0000 7800 01e9 0000 0000 0000 9108 1900

(Note to self: Use “predesign” git bundle, checkout e.g. ’138358c’, run build TCL script on Vivado 2014.1 and then on PC compile and run ./dump_drp_regs)

Fractional divider register settings

Two dividers in each MMCME2 allow a fractional division ratio: The feedback divider (CLKFBOUT_MULT_F, effectively the clock multiplier) and the output divider of one of the clocks (CLKOUT0_DIVIDE_F).

The reference design assigns correct values to the relevant registers, but is exceptionally difficult to decipher.

The algorithm for calculating the registers’ values is the same for CLKFBOUT_MULT_F and CLKOUT0_DIVIDE_F. The values obtained for all registers, except high_time and low_time, depend only on (8x mod 16), where x is either CLKFBOUT_MULT_F or CLKOUT0_DIVIDE_F, given as the actual division ratio.

The values of the registers, as set by Vivado, are given below for division ratios going from 4.000 to 5.875, in steps of 0.125 (the high_time and low_time values may appear not to agree with this, but these are the actual numbers).

frac_en  high_time  low_time  edge  frac  phase_mux_f  frac_wf_r  frac_wf_f
   0         2         2       0     0         0           0          0
   1         1         1       0     1         0           1          0
   1         1         1       0     2         1           1          1
   1         1         1       0     3         1           1          1
   1         1         1       0     4         2           1          1
   1         1         1       0     5         2           1          1
   1         1         1       0     6         3           1          1
   1         1         1       0     7         3           1          1
   0         2         3       1     0         0           0          0
   1         2         1       1     1         4           0          1
   1         2         2       1     2         5           0          0
   1         2         2       1     3         5           0          0
   1         2         2       1     4         6           0          0
   1         2         2       1     5         6           0          0
   1         2         2       1     6         7           0          0
   1         2         2       1     7         7           0          0
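As an aside, most of the pattern in the table can be captured with a few integer operations. The following is a minimal shell sketch, with the formulas reverse-engineered by me from the table above (the function name and output format are my own, and high_time / low_time are deliberately left out, as they don’t follow this pattern):

```shell
# Deduce some fractional-divider fields from the division ratio, given
# in eighths (x8 = 8 * ratio, e.g. 41 for 5.125). Formulas guessed from
# the table above, not taken from any official source.
frac_fields () {
    x8=$1
    frac=$(( x8 % 8 ))                 # The fractional part, in eighths
    frac_en=$(( frac != 0 ))
    edge=$(( (x8 % 16) >= 8 ))         # Hence the dependence on 8x mod 16
    if [ $frac_en -eq 1 ] ; then
        phase_mux_f=$(( edge * 4 + frac / 2 ))
    else
        phase_mux_f=0
    fi
    echo "frac_en=$frac_en frac=$frac edge=$edge phase_mux_f=$phase_mux_f"
}

frac_fields 41   # 8 * 5.125, as in the CLKFBOUT_MULT_F setting above
```

For x8=41 this prints frac_en=1 frac=1 edge=1 phase_mux_f=4, matching the tenth row of the table.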

Loop filter and lock parameters

Depending on the feedback divider’s integer value (MULT_F in the table below), several registers related to lock detection and the loop filter are set with values taken from a lookup table in the reference design. Comparing the values assigned by Vivado 2014.1 (again, by reading back the DRP registers) with those in the reference design for a selected set of MULT_Fs reveals a match as far as the lock detection registers are concerned. However, the registers related to the digital loop filter were set to completely different values by Vivado. As there is no documentation available on these registers, it’s not clear what impact this difference has, if any.

The following table shows the values assigned by Vivado 2014.1 for a set of MULT_Fs. The rightmost columns show the loop filter bits, in the same order as they appear in the reference design (MSB to LSB, left to right). All other columns are given in plain decimal notation.

MULT_F  LockRefDly  LockFBDly  LockCnt  LockSatHigh  UnlockCnt  x x x x x x x x x x
   4        11          11       1000      1001          1      0 1 1 1 0 1 1 1 0 0
   8        22          22       1000      1001          1      1 1 1 1 0 0 1 1 0 0
  12        31          31        825      1001          1      1 1 0 1 0 0 0 1 0 0
  16        31          31        625      1001          1      1 1 1 1 1 0 0 1 0 0
  20        31          31        500      1001          1      1 1 0 0 0 0 0 1 0 0
  24        31          31        400      1001          1      0 1 0 1 1 1 0 0 0 0
  28        31          31        350      1001          1      0 0 1 1 0 1 0 0 0 0
  32        31          31        300      1001          1      0 0 1 1 0 1 0 0 0 0

Sporadic tests with setting these registers as if MULT_F was completely different (e.g. as if MULT_F=64 for a much lower actual setting) reveal that nothing apparent happens: no loss of lock, and no apparent difference in jitter performance (not measured, though). Also, the VCO of the tested FPGA (speed grade 2) remained firmly locked at frequencies as low as 20 MHz (using non-fractional ratios) and as high as 3000 MHz (even though the datasheet ensures 600-1440 MHz only). This was run for several minutes at each frequency, with a junction temperature of 56°C.

All in all, there’s an uncertainty regarding the loop filter parameters, but there’s reason to hope that this has no practical significance.

Linux/Gnome: When selection of text goes inactive as soon as the mouse button is released

… go through any gitk window on the desktop and click on it, to release it from some unexpected GUI state.

Just wanted that written down for the next time I try to select a segment in XEmacs or the gnome-terminal window, and the selection goes away as I release the mouse button.

Debian package notes (when apt and the Automatic Updater in Ubuntu aren’t good enough)

Just a few jots on handling packages in Ubuntu. This post is a true mess.

Pinning

The bottom line seems to be not to use the Software Updater, but instead go

# apt-get upgrade

To prevent certain packages from being updated, see the Pinning Howto page and the Apt Preferences page, which cover the internals as well.

There’s also the manpage:

$ man apt_preferences

Repositories

The repositories known by apt-get are listed in /etc/apt/sources.list.d/ and /etc/apt/sources.list. For example, adding a repository:

# add-apt-repository ppa:vovoid/vsxu-release

Removing a repository e.g.

# add-apt-repository --remove ppa:vovoid/vsxu-release

and always do

# apt-get update

after changing the repository set. Now, you might get something like

E: The repository 'http://ppa.launchpad.net/...' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default

If you want to insist on using such a repository (at your own risk), go

# apt update --allow-insecure-repositories

and may the force be with you.

Among others, adding the repository above creates /etc/apt/sources.list.d/vovoid-vsxu-release-trusty.list saying

deb http://ppa.launchpad.net/vovoid/vsxu-release/ubuntu trusty main
# deb-src http://ppa.launchpad.net/vovoid/vsxu-release/ubuntu trusty main

“trusty” refers to Ubuntu 14.04, of course.

Look at this page for more info.

Checking what apt-get would install

# apt-get -s upgrade | less

The packages related to the Linux kernel: linux-generic linux-headers-generic linux-image-generic

It’s worth looking at this page regarding what “kept back” means (but the bottom line is that these packages won’t be installed).

Being open to suggestions

Kodi, for example, has a lot of “side packages” that are good to install along with it. This is how to tell apt-get to grab them as well:

# apt-get install --install-suggests kodi

Pinning with dpkg

This doesn’t work with apt-get nor the Automatic Updater (based upon these two web pages):

List all packages

$ dpkg -l

Wildcards can be used to find specific packages. For example, those related to the current kernel:

$ dpkg -l "*$(uname -r)*"
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                          Version                     Architecture                Description
+++-=============================================-===========================-===========================-===============================================================================================
ii  linux-headers-3.13.0-35-generic               3.13.0-35.62                amd64                       Linux kernel headers for version 3.13.0 on 64 bit x86 SMP
ii  linux-image-3.13.0-35-generic                 3.13.0-35.62                amd64                       Linux kernel image for version 3.13.0 on 64 bit x86 SMP
ii  linux-image-extra-3.13.0-35-generic           3.13.0-35.62                amd64                       Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP

Or, to get just the package names:

$ dpkg -l | awk '{ print $2; }' | grep "$(uname -r)"

Pinning a package

Aug 2019 update: Maybe with apt-mark? Haven’t tried that yet.
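For the record, the apt-mark way would presumably be as follows (untried here, as just noted; hold, showhold and unhold are real apt-mark subcommands, but the package names are just examples):

```shell
# Hold the kernel-related packages (example names):
apt-mark hold linux-image-generic linux-headers-generic
# List what's currently held:
apt-mark showhold
# Release the hold again:
apt-mark unhold linux-image-generic linux-headers-generic
```

These commands need to be run as root, like the dpkg method below.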

In order to prevent a certain package from being updated, use the “hold” setting for the package. For example, holding the kernel related package automatically (all three packages) as root:

# dpkg -l | awk '{ print $2; }' | grep "$(uname -r)" | while read i ; do echo $i hold ; done | dpkg --set-selections

After this, the listing of these packages is:

$ dpkg -l "*$(uname -r)*"
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                          Version                     Architecture                Description
+++-=============================================-===========================-===========================-===============================================================================================
hi  linux-headers-3.13.0-35-generic               3.13.0-35.62                amd64                       Linux kernel headers for version 3.13.0 on 64 bit x86 SMP
hi  linux-image-3.13.0-35-generic                 3.13.0-35.62                amd64                       Linux kernel image for version 3.13.0 on 64 bit x86 SMP
hi  linux-image-extra-3.13.0-35-generic           3.13.0-35.62                amd64                       Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP

Indeed, the “h” notes that the packages are held. To revert this, use “install” instead of “hold” in the input to dpkg --set-selections above.
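By the way, the text-processing part of that pipeline can be tried out safely on canned output (the sample lines below are made up, mimicking dpkg -l’s format, and the kernel version is hardcoded instead of $(uname -r)):

```shell
# Fake "dpkg -l" output, in its format (made-up version strings):
sample='ii  linux-image-3.13.0-35-generic  3.13.0-35.62  amd64  Linux kernel image
ii  bash                           4.3-7ubuntu1  amd64  GNU Bourne Again SHell'

# Same pipeline as above, minus the real dpkg calls at both ends:
echo "$sample" | awk '{ print $2; }' | grep "3.13.0-35" | \
    while read i ; do echo $i hold ; done
```

This prints “linux-image-3.13.0-35-generic hold”, which is exactly the format dpkg --set-selections expects on its input.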

Which package provides file X?

Following this page, install apt-file (simply with apt-get install apt-file), go “apt-file update” once and then go something like (not necessarily as root):

$ apt-file find libgnome-2.so

Note that the pattern can be a substring (as in the example above).

What files does package X generate?

$ dpkg -L libpulse-dev

Installing a deb file locally

# dpkg -i thepackage.deb

If there are failed dependencies, fix them with apt-get subsequently:

# apt-get -f install

and if it says that it wants to remove the package you tried to install, go

# apt-get install -f --fix-missing

That will probably not help directly, but odds are apt-get will at least explain why it wants to kick out the package.

To make apt ignore a failed post-installation script, consider this post.

Extracting the files from a repository

This can be used for running more than one version of Google Chrome on a computer. See this post for a few more words on this.

Extract the .deb package:

$ ar x google-chrome-stable_current_amd64.deb

Note that the files go into the current directory (yuck).

Extract the package’s files:

$ mkdir files
$ cd files
$ tar -xJf ../data.tar.xz

Extract the installation scripts:

$ mkdir scripts
$ cd scripts/
$ tar -xJf ../control.tar.xz

A word on repositories

Say that we have a line like this in /etc/apt/sources.list:

deb http://archive.ubuntu.com/ubuntu xenial main universe updates restricted security backports

It tells apt-get update to go to http://archive.ubuntu.com/ubuntu/dists/ and look into xenial/main for the “main” part, xenial/universe for the “universe” part, but e.g. xenial-updates/ for the “updates” part. This site helps with a better understanding of how a sources.list file is set up.
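A little shell sketch of that mapping, for the plain components only (ignoring the xenial-updates twist just mentioned; the variable names are mine):

```shell
# Map a simplified sources.list line to its directories under dists/:
line='deb http://archive.ubuntu.com/ubuntu xenial main universe'
set -- $line            # Split: $1="deb", $2=base URL, $3=suite, rest=components
url=$2 ; suite=$3 ; shift 3
for comp in "$@" ; do
    echo "$url/dists/$suite/$comp/"
done
```

This prints http://archive.ubuntu.com/ubuntu/dists/xenial/main/ and the same for universe.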

If we look at e.g. ubuntu/dists/xenial/main/, there’s a binary-amd64/ subdirectory for the amd64 platforms (64-bit Intel/AMD). That’s where the Packages.gz and Packages.xz files are found. These list the packages available in the repositories, but even more important: where to find them.

For example, the entry for the “adduser” package looks like this:

Package: adduser
Priority: required
Section: admin
Installed-Size: 648
Maintainer: Ubuntu Core Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian Adduser Developers <adduser-devel@lists.alioth.debian.org>
Architecture: all
Version: 3.113+nmu3ubuntu4
Replaces: manpages-it (<< 0.3.4-2), manpages-pl (<= 20051117-1)
Depends: perl-base (>= 5.6.0), passwd (>= 1:4.1.5.1-1.1ubuntu6), debconf | debconf-2.0
Suggests: liblocale-gettext-perl, perl-modules, ecryptfs-utils (>= 67-1)
Filename: pool/main/a/adduser/adduser_3.113+nmu3ubuntu4_all.deb
Size: 161698
MD5sum: 36f79d952ced9bde3359b63cf9cf44fb
SHA1: 6a5b8f58e33d5c9a25f79c6da80a64bf104e6268
SHA256: ca6c86cb229082cc22874ed320eac8d128cc91f086fe5687946e7d05758516a3
Description: add and remove users and groups
Multi-Arch: foreign
Homepage: http://alioth.debian.org/projects/adduser/
Description-md5: 7965b5cd83972a254552a570bcd32c93
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
Supported: 5y
Task: minimal

As is evident, this entry contains dependency information, but most important: It points at where the package can be downloaded from: pool/main/a/adduser/adduser_3.113+nmu3ubuntu4_all.deb in this case, which actually points to http://archive.ubuntu.com/ubuntu/pool/main/a/adduser/adduser_3.113+nmu3ubuntu4_all.deb.

Note that the URL’s base is the repository’s root, and not necessarily the root of the domain. Since the Packages file contains the SHA1 sum of the .deb file, its own SHA1 sum is listed in http://archive.ubuntu.com/ubuntu/dists/xenial/InRelease, which also contains a PGP signature.
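To make the point concrete, here’s how the download URL follows from the Filename field (stanza trimmed from the adduser example above; the sed extraction is my own):

```shell
# A trimmed Packages stanza, as shown above:
stanza='Package: adduser
Filename: pool/main/a/adduser/adduser_3.113+nmu3ubuntu4_all.deb
Size: 161698'

base=http://archive.ubuntu.com/ubuntu   # The repository's root
# Pick the Filename field and glue it to the repository's root:
path=$(echo "$stanza" | sed -n 's/^Filename: //p')
echo "$base/$path"
```

The output is the full download URL quoted in the paragraph above.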

The various “Contents” files (e.g. dists/xenial/Contents-amd64.gz) seem to contain a list of files and the packages they belong to, probably for use by apt-file.

Vivado: Random notes about the XDC constraints file

These are a few jots about constraining in Vivado. With no particular common topic, and in no particular order. Note that I have another post on similar topics.

Setting the default IOSTANDARD for all ports

In a design where almost all ports have the same IOSTANDARD, it’s tedious to set it for each one individually. So if there’s just one exception, one can go

set_property IOSTANDARD LVCMOS33 [get_ports -filter { LOC =~ IOB_* } ]
set_property IOSTANDARD LVDS_25 [get_ports clk_100_p]

It’s important to do this after the placement constraints of the ports, because the LOC property is set only in conjunction with setting PACKAGE_PIN. Filtering based upon LOC is required to avoid inclusion of MGT ports in the get_ports command, which would fail the set_property command altogether (yielding not only a critical warning: none of the IOSTANDARDs would be set).

Tell the truth about your heat sink

Vivado makes an estimation of the junction temperature, based upon its power estimation. That’s the figure that you want to keep below 85°C (if you’re using a commercial temperature version of the FPGA).

With all my reservations about the accuracy of the power estimation, and hence the temperature it calculates based upon it, it makes sense to tell Vivado about the chosen cooling solution. Otherwise, it assumes a heatsink with a healthy fan above it. So if you like to live on the edge, like me, and work without a noisy fan, these two lines in the XDC file tell Vivado to adjust the effective junction-to-air thermal resistance (Θ_JA).

set_operating_conditions -airflow 0
set_operating_conditions -heatsink low

It’s also possible to set Θ_JA explicitly with -thetaja. Try “help set_operating_conditions” at the Tcl prompt for a list of options.

Frankly speaking, the predicted junction temperature stated in the power report is probably rubbish anyhow, even if the power estimation is accurate. The reason is that there’s a low thermal resistance towards the board: If the board remains at 25°C, the junction temperature will be lower than predicted. On the other hand, if the board heats up from adjacent components, a higher temperature will be measured. In a way, the FPGA serves as a cooling path from the board to the air. With extra power flowing through this path, the temperature rises all along it.

For example, the temperature I got with the setting above on a KC705, with the fan taken off, was significantly higher (~60°C) than Vivado’s prediction (44°C) on a design that had little uncertainty (90% of the estimated power was covered by static power and GTXs with a fixed rate — there was almost no logic in the design). The junction temperature was measured through JTAG from the Hardware Manager.

So the only thing that really counts is the junction temperature measured after 15 minutes or so of operation.

Search patterns for finding elements

Pattern matching in Vivado is slash-sensitive. For example,

foreach x  [get_pins "*/*/a*"] { puts $x }

prints pins three hierarchy levels down whose names begin with “a”, but a plain “a*” matches only pins at the top level.

The “foreach” is given here to demonstrate loops. It’s actually easier to go

join [get_pins "*/*/a*"] "\n"

To make “*” match any character, “/” included, it’s possible, yet not such a good idea, to use UCF-style matching, e.g.

foreach x [get_pins -match_style ucf "*/fifo*"] { puts $x }

or a more relevant example

get_pins -match_style ucf */PS7_i/FCLKCLK[1]

The better way is to forget about the old UCF file format. The Vivado way to allow “*” to match any character, including a slash, is filters:

set_property LOC GTXE2_CHANNEL_X0Y8 [get_cells -hier -filter {name=~*/gt0_mygtx_i/gtxe2_i}]

Another important concept in Vivado is the “-of” flag, which allows finding all nets connected to a cell, all cells connected to a net, etc.

For example,

get_nets -of [get_cells -hier -filter {name=~*/gt_top_i/phy_rdy_n_int_reg}]

Group clocks instead of a lot of false paths

Unlike ISE, Vivado assumes that all clocks are “related”: if two clocks come from sources for which the tools have no reason to assume a common origin, ISE will consider all paths between the clock domains as false paths. Vivado, on the other hand, will assume that these paths are real, and will probably end up with an extreme constraint, take ages in the attempt to meet timing, and then fail timing, of course.

Even in reference designs, this is handled by issuing false paths between each pair of unrelated clocks (two false path statements for each pair, usually). This is messy, often with complicated expressions appearing twice. And a lot of issues every time a new clock is introduced.

The clean way is to group the clocks. Each group contains all clocks that are considered related. Paths inside a group are constrained. Paths between groups are false. Simple and intuitive.

set_clock_groups -asynchronous \
  -group [list \
     [get_clocks -include_generated_clocks -of_objects [get_pins -hier -filter {name=~*gt0_mygtx_i*gtxe2_i*TXOUTCLK}]] \
     [get_clocks -include_generated_clocks "gt0_txusrclk_i"]] \
  -group [get_clocks -include_generated_clocks "drpclk_in_i"] \
  -group [list \
     [get_clocks -include_generated_clocks "sys_clk"] \
     [get_clocks -include_generated_clocks -of_objects [get_pins -hier -filter {name=~*/pipe_clock/pipe_clock/mmcm_i/*}]]]

In the example above, three clock groups are declared.

As a group often consists of several clocks, each requiring a tangled expression to pin down, it may come in handy to define a list of clocks with the “list” Tcl statement, as shown above.

Another thing to note is that clocks can be obtained as all clocks connected to a certain MMCM or PLL, as shown above, with -of_objects. To keep the definitions short, it’s possible to use create_generated_clock to name clocks that can be found in certain parts of the design (create_clock is applied to external pins only).

If a clock is accidentally not included in this statement, don’t worry: Vivado will assume valid paths for all clock domain crossings involving it, and it will probably take a place of honor in the timing report.

Finally, it’s often desired to tell Vivado to consider clocks that are created by an MMCM / PLL as independent. If a Clock Wizard IP was used, it boils down to something as simple as this:

set_clock_groups -asynchronous \
 -group [get_clocks -include_generated_clocks -of_objects [get_pins -hier -filter {name=~*clk_gen_ins/clk_in1}]] \
 -group [get_clocks -include_generated_clocks -of_objects [get_pins -hier -filter {name=~*clk_gen_ins/clk_out1}]]

which simply says “the input and output clocks of the clock module are independent”. This can be expanded to more outputs, of course.

Telling the tools what the BUFGCE/BUFGMUX is set to

Suppose a segment like this:

BUFGCE clkout1_buf
 (.O   (slow_clk),
 .CE  (seq_reg1[7]),
 .I   (clkout1));

To tell the tools that the timing analysis should be made with the assumption that BUFGCE is enabled,

set_case_analysis 1 [get_pins -hier -filter {name=~*/clkout1_buf/CE0}]
set_case_analysis 1 [get_pins -hier -filter {name=~*/clkout1_buf/S0}]

The truth is that it’s redundant in this case, as the tools assume that CE=1. But this is the syntax anyhow.

Constant clock? Who? Where? Why?

One of the things to verify before being happy with a design’s timing (a.k.a. “signing off timing”), according to UG906 (Design Analysis and Closure Techniques), is that there are no constant clocks nor unconstrained internal endpoints. But hey, what if there are? Like, when running “Report Timing Summary”, the number under “Check Timing” for “constant clock” is far from zero, and the timing summary says this:

2. checking constant clock
--------------------------
 There are 2574 register/latch pins with constant_clock. (MEDIUM)

3. checking pulse_width_clock
-----------------------------
 There are 0 register/latch pins which need pulse_width check

4. checking unconstrained_internal_endpoints
--------------------------------------------
 There are 0 pins that are not constrained for maximum delay.

 There are 5824 pins that are not constrained for maximum delay due to constant clock. (MEDIUM)

Ehm. So which clock caused this, and what are the endpoints involved? It’s actually simple to get that information. Just go

check_timing -verbose -file my_timing_report.txt

on the Tcl prompt, and read the file. The registers and endpoints are listed in the output file.

Floorplanning (placement constraints for logic)

The name of the game is Pblocks. I didn’t dive much into the semantics, but used the GUI’s Tools > Floorplanning menus to create a Pblock and auto-place it. Then I saved the constraints and manipulated the Tcl commands manually (i.e. the get_cells command and the choice of slices).

create_pblock pblock_registers_ins
add_cells_to_pblock [get_pblocks pblock_registers_ins] [get_cells -quiet -hierarchical -filter { NAME =~  "registers_ins/*" && PRIMITIVE_TYPE =~ FLOP_LATCH.*.* && NAME !~  "registers_ins/fifo_*" }]
resize_pblock [get_pblocks pblock_registers_ins] -add {SLICE_X0Y0:SLICE_X151Y99}

The snippet above places all flip-flops (that is, registers) of a certain module, except those belonging to a couple of submodules (excluded by the second NAME filter), in the bottom area of a 7V330T. The constraint is non-exclusive (other logic is allowed in the region as well).

The desired slice region was found by hovering with the mouse over a zoomed in chip view of an implemented design.

The tools obeyed this constraint strictly, even with post-route optimization, so it’s important not to shoot yourself in the foot when using this for timing improvement (in my case it worked).

To see how the logic is spread out, use the “highlight leaf cells” option when right-clicking a hierarchy in the netlist view to the left of a chip view. Or even better, use Tcl commands on the console:

unhighlight_objects
highlight_objects -color red [get_cells -hierarchical -filter { NAME =~  "registers_ins/*" && PRIMITIVE_TYPE =~ FLOP_LATCH.*.* && NAME !~  "registers_ins/fifo_*" }]

The first command removes existing highlight. There’s an -rgb flag too for selecting the exact color.

There’s also show_objects -name mysearch [ get_cells ... ], which is how the GUI’s “find” operation creates those lists of elements to inspect in the GUI.

System File Checker: The savior for Windows 7 and 8

When a Windows 7 or Windows 8 machine starts to behave weirdly, this is the general-purpose command that can save your day (in the Command Prompt):

sfc /scannow

It scans all system files and fixes whatever looks bad. In my case, it started off as a “Limited” wireless connection on a laptop (after it had been fine for a year), which turned out to be the lack of a DHCP request, and ended up with the understanding that the DHCP client service couldn’t be started because some “element” was missing. Now go fix that manually.

The scan took some 30 minutes, but after the reboot, all was fine again.

For more of my war stories, click here.