I just bought a DigiPro 5″/4″ drawing tablet to run with my Fedora 12. When plugging it in, the system recognized it, but every time I touched the tablet with the stylus pen, the cursor went to the upper left corner. Clicks worked OK, but it looked like the system needed to know the tablet’s dimensions.
To the system, this tablet is UC-LOGIC Tablet WP5540U. How do I know? Because when I plug it in, /var/log/messages gives:
Jun 29 19:49:06 big kernel: usb 6-1: new low speed USB device using uhci_hcd and address 5
Jun 29 19:49:06 big kernel: usb 6-1: New USB device found, idVendor=5543, idProduct=0004
Jun 29 19:49:06 big kernel: usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jun 29 19:49:06 big kernel: usb 6-1: Product: Tablet WP5540U
Jun 29 19:49:06 big kernel: usb 6-1: Manufacturer: UC-LOGIC
Jun 29 19:49:06 big kernel: usb 6-1: configuration #1 chosen from 1 choice
Jun 29 19:49:06 big kernel: input: UC-LOGIC Tablet WP5540U as /devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1:1.0/input/input9
Jun 29 19:49:06 big kernel: generic-usb 0003:5543:0004.0005: input,hidraw1: USB HID v1.00 Mouse [UC-LOGIC Tablet WP5540U] on usb-0000:00:1d.0-1/input0
To get it going, I followed the solution in Fedoraunity (which requires registration to access, would you believe that!)
First, I downloaded the wizardpen RPM package from here.
And installed it:
# rpm -i wizardpen-0.7.0-0.fc12.x86_64.rpm
And then I ran the calibration utility. For some, the device is /dev/input/event8 instead; just play with the event numbers until hitting gold:
# wizardpen-calibrate /dev/input/event6
Please, press the stilus at ANY
corner of your desired working area: ok, got 1928,3766
Please, press the stilus at OPPOSITE
corner of your desired working area: ok, got 30360,28914
According to your input you may put following
lines into your XF86Config file:
Driver "wizardpen"
Option "Device" "/dev/input/event6"
Option "TopX" "1928"
Option "TopY" "3766"
Option "BottomX" "30360"
Option "BottomY" "28914"
Option "MaxX" "30360"
Option "MaxY" "28914"
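For reference, in a classic xorg.conf these option lines would sit inside an InputDevice section, roughly like this (the Identifier string is arbitrary, and the section would also need to be referenced from the ServerLayout section):

```
Section "InputDevice"
    Identifier "WizardPen Tablet"
    Driver     "wizardpen"
    Option     "Device"  "/dev/input/event6"
    Option     "TopX"    "1928"
    Option     "TopY"    "3766"
    Option     "BottomX" "30360"
    Option     "BottomY" "28914"
    Option     "MaxX"    "30360"
    Option     "MaxY"    "28914"
EndSection
```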
Now, one of the side effects of installing the wizardpen RPM package was that it created a file, /etc/hal/fdi/policy/99-x11-wizardpen.fdi, which is a HAL fdi file. If you’ve been editing xorg.conf files, that is now old history. Instead of the mumbo-jumbo above, there is a new mumbo-jumbo, which is supposed to work even when the device is hotplugged. No need to restart X for new devices! Hurray!
So I downloaded the recommended XML file from here, modified the variables according to my own calibration, and saved the following as /etc/hal/fdi/policy/99-wizardpen.fdi (and trashed the previous file. The names are slightly different, who cares).
<?xml version="1.0" encoding="ISO-8859-1"?>
<deviceinfo version="0.2">
<device>
<!-- This MUST match with the name of your tablet -->
<match key="info.product" contains="UC-LOGIC Tablet WP5540U">
<merge key="input.x11_driver" type="string">wizardpen</merge>
<merge key="input.x11_options.SendCoreEvents" type="string">true</merge>
<merge key="input.x11_options.TopX" type="string">1928</merge>
<merge key="input.x11_options.TopY" type="string">3766</merge>
<merge key="input.x11_identifier" type="string">stylus</merge>
<merge key="input.x11_options.BottomX" type="string">30360</merge>
<merge key="input.x11_options.BottomY" type="string">28914</merge>
<merge key="input.x11_options.MaxX" type="string">30360</merge>
<merge key="input.x11_options.MaxY" type="string">28914</merge>
</match>
</device>
</deviceinfo>
According to the reference mentioned above, there’s a need to log in again. It turns out that replugging the tablet into the USB port is enough to get it up and running OK.
To make GIMP respond to pressure sensitivity, set up the input device as follows: go to Edit > Preferences > Input Devices and press Configure Extended Input Devices. Under Devices, find the tablet’s name and set its mode to Screen.
This is a short note about how to get rid of cellulite while keeping the skin looking natural, using GIMP 2.6 (this will most likely work on earlier versions as well).
The truth is that I don’t really understand why this works, but it fixed a nasty case of ugly skin texture in a low key photo. The trick was using the hard light layer mode, which is described in detail in the GIMP documentation. Unfortunately, the explanations and equations didn’t help me much in understanding why it happened as it happened.
So here’s the procedure, as I did it. If it doesn’t work for you, don’t blame me. I have no idea what I actually did.
Original image:
Original image
Duplicate the layer, and blur the upper layer strongly (Gaussian blur, radius 40 in our case)
Stage two: Image blurred
Set upper layer’s mode as “Hard light”
Stage 3: Hard light applied
Merge down the upper layer, so they become one layer, and reduce the saturation:
Final result
This may not look like a significant change, but when zooming out, it is.
One of the nice things about upgrading software is not only that you get a lot of new, confusing and useless features, but also that things which used to work in the past don’t anymore. At best, features one used a lot have completely disappeared. Upgrading to Fedora 12, with its GIMP 2.6, was no exception.
It looks like the GIMP developers found the Color Range Mapping plugin useless, as is apparent from their correspondence. As Sven Neumann says over there, “Let’s remove those plug-ins then if the code sucks that much. Are they in anyway useful at all?”
Let me answer you, Sven. Yes, it’s very very useful. I don’t know if Photoshop has a similar feature, but color range mapping is extremely useful in photomontage. That’s when you need one exact color at one place, and another exact color at another.
When the absence of the relevant plugin was reported in a bug report, it was said that “Looking at it again, the plug-in really was so badly broken, I would prefer if we would not have to add it back.” Broken or not, I love this plugin.
To fix my own problem, I followed this post and fetched myself an x86_64 RPM of an old version of GIMP. Matching the architecture is important, because the plugins are precompiled binaries.
I downloaded just the first RPM I could find of GIMP, which was of version 2.4 and compiled for x86_64, and then extracted the files in an empty directory with
rpm2cpio gimp-2.4.6-1.fc7.x86_64.rpm | cpio -idvm
And then, as root:
cp usr/lib64/gimp/2.0/plug-ins/mapcolor /usr/lib64/gimp/2.0/plug-ins/
and that’s all! Restarting GIMP I found my beloved plugin there. Happy, happy, joy, joy!
Update 14.7.21: In Gimp 2.10, there’s a separate directory for each plug-in, but all in all, it’s the same:
# cd /usr/lib/gimp/2.0/plug-ins/
# mkdir mapcolor
# cp /path/to/old/gimp/2.0/plug-ins/mapcolor mapcolor/
Having a pretty large state machine, I wanted the states enumerated automatically. Or at least not to do the counting by hand. I mean, doing it once is maybe bearable, but what if I want to insert a new state in the future?
So what I was after is something like
module main_state #(parameter
ST_start = 0,
ST_longwait1 = 1,
ST_longwait2 = 2,
ST_synthesizer = 3,
ST_synth_dwell = 4)
to support a state machine like
case (state)
ST_start:
begin
state <= ST_longwait1;
reset <= 1;
end
ST_longwait1:
begin
(...)
end
and so on. Only with more than 20 states.
The solution is short and elegant. At the command prompt (a DOS window for unfortunate Windows users), this simple Perl one-liner does the job (note that the quoting below assumes a UNIX shell):
perl -ne 'print "$1 = ".$i++.",\n" if /^[ \t]*(\w+)[ \t]*:/;' < main_state.v
where main_state.v is the Verilog module’s file, of course. The script looks for a word at the beginning of a line (possibly after whitespace), followed by a ‘:’. This is not bulletproof, but is likely to work. The output should be a list of state number assignments, which you can copy-paste into the code.
So this script will work only if there’s a single case statement in the input, which is the Verilog module itself. If there are several, just copy the relevant case statement into a separate file, and put that file’s name instead of main_state.v in the example above.
If you happen to be impressed by the magic, then you should probably take some time to play around with Perl. Every minute spent on that will be regained later. Believe me.
And if you don’t have Perl on your computer, that surely means you have a bad taste for operation systems. There are two possible ways to come around this:
- If you have Xilinx ISE installed on your computer, try writing “xilperl” instead of “perl” above. xilperl is just a normal Perl interpreter, installed along with the Xilinx tools.
- Download Perl for your operating system. It’s free (as in freedom) software, so there is no reason why you should pay a penny for this. There are many sources for Perl. I gave the link to ActivePerl, which is the one I happen to know about.
The problem is relatively simple: Sometimes I take images that are deliberately underexposed, or that have important parts in the dark areas. This is then fixed with GIMP. But in order to choose which image to play with, I need those details to be visible in some preview image, so I can browse them with an image viewer. Playing with each shot manually is out of the question.
My original thought was to use GIMP in a script, as I’ve shown in the past and feed GIMP with some LISP commands so it resizes the image and runs a “Curves” command.
But then I thought it would be much easier with the “convert” utility. So here’s a short script, which downsizes the image by a factor of 4 and gives some visible dynamic range. If you want to use this, I warmly suggest reading the ImageMagick manual page, since the values given below were right for one specific set of shots. You’ll need to tweak them a bit to get it right for you.
The script generates copies of the originals, of course…
#!/bin/bash
for i in IMG_* ; do
    echo "$i"
    convert "$i" -resize 25%x25% -level 0,1.0,16300 -gamma 2.0 "view_$i"
done
This is how to solve a special case: a PDF file is given, but I want to add my remarks in some free space.
The trick is to write the remarks into another single-page PDF file, so that the new text occupies the blank area in the original. In my case, I needed the remark on the second page, so I used the pdftk command-line utility to split the pages into two files, kind-of watermark the second page with my own PDF file, and then rejoin them.
pdftk is free software, and can be downloaded for various platforms here. If you have a fairly sane Linux distribution, you should be able to just grab a package of it (“yum install pdftk” or something).
Surprisingly enough, this was the most elegant solution I could come up with. This is the little bash script I wrote:
#!/bin/bash
tmpfile=tmp-delme-$$
# Split two pages into two files:
pdftk original.pdf cat 1 output $tmpfile-page1.pdf
pdftk original.pdf cat 2 output $tmpfile-page2.pdf
# Add footnote as if it's a watermark
pdftk $tmpfile-page2.pdf stamp footnote.pdf output $tmpfile-marked.pdf
# Recombine the two pages again
pdftk $tmpfile-page1.pdf $tmpfile-marked.pdf cat output original-marked.pdf
# Clean up
rm -f $tmpfile-page1.pdf $tmpfile-page2.pdf $tmpfile-marked.pdf
A short note, since it’s so simple and so important. When Firefox gets painfully slow, just compact its Sqlite databases. As has been pointed out elsewhere, the quick fix is to close Firefox, go to where it holds its files, find the .sqlite files, and go (bash under Cygwin, in my case):
$ for i in *.sqlite; do echo "VACUUM;" | sqlite3 "$i" ; done
And it helps a lot. It’s not just the files getting smaller. It’s everything getting faster.
The sqlite binary for Windows can be found here.
There is a Firefox plugin for this and a standalone application, but I like it in good-old command line with my full control on what’s happening.
The problem: In LaTeX, if I import an EPS file with \includegraphics and rotate it by 90 degrees, hell breaks loose in the resulting PDF file.
My processing chain, in case you wonder, is latex, dvips and ps2pdf. I avoid pdflatex since, as far as I can recall, it won’t import EPS, only images converted to PDF. Or something. It was a long time ago.
The figure is generated with
\begin{figure}[!btp]\begin{center}
\includegraphics[width=0.9\textheight,angle=-90]{blockdiagram.eps}
\caption{Modulator's block diagram}\label{blockdiagram}
\end{center}\end{figure}
which successfully rotates the figure as requested, but unfortunately also causes the page to appear in landscape format in Acrobat. While this is slightly annoying, the real problem is that the file will or won’t print properly, depending on the particular computer you print from, and possibly on the weather as well.
The curious thing about this is that if I choose angle=-89.99 it turns out the way I want, but I have a feeling that this will not end well.
Using \rotatebox instead didn’t work either:
\rotatebox{-90}{\includegraphics[width=0.9\textheight]{blockdiagram.eps}}
It looks like this does exactly the same (and the -89.99 trick works here too). Now, it’s pretty evident that the clean 90 degrees value triggers some hidden mechanism which tries to be helpful, but ends up messing things up instead. So this is how I solved it, eventually:
\begin{figure}[!btp]\begin{center}
\rotatebox{-1}{\includegraphics[width=0.9\textheight,angle=-89]{blockdiagram.eps}}
\caption{Modulator's block diagram}\label{blockdiagram}
\end{center}\end{figure}
In words: Rotate the EPS by 89 degrees, and then by another degree, so we get exactly 90 degrees overall. This leaves some room for precision errors, if the rotation involves actual calculations of coordinates (I have no idea if this is the case), but this is as close to 90 degrees as I managed to get, without having the page messed up.
Not an ideal solution. If you know how to do this better, please comment below!
Ah, I should mention that it’s possible to rotate the EPS file first, and then import it into the LaTeX document as is. If the whole generation runs from a makefile anyhow, this shouldn’t be too annoying. But it turns out (see here) that it’s not all that simple to rotate an EPS. I haven’t tried that solution, but it looks like it should work on any Linux machine. Anyhow, I didn’t feel like playing with bounding boxes.
IMPORTANT: This post may very well be useless mumbo-jumbo of how to do something that should never be done anyhow. I post it only because I had it all written down neatly when I discovered that I could have skipped it all.
From what I’ve seen, everything you’ll need is in the x86_64 repository, including the relevant i686 RPMs for 32-bit compilation.
Why I messed up in the first place
The whole saga started when the compiler complained about not having the stubs-32.h file. So I went:
$ yum provides '*/stubs-32.h'
Loaded plugins: presto, refresh-packagekit
glibc-devel-2.11-2.i686 : Object files for development using standard C
: libraries.
Repo : fedora
Matched from:
Filename : /usr/include/gnu/stubs-32.h
<...same package from other sources...>
OK, So let’s install it, shall we?
# yum install glibc-devel-2.11-2.i686
Loaded plugins: presto, refresh-packagekit
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package glibc-devel.i686 0:2.11-2 set to be updated
--> Processing Dependency: glibc = 2.11-2 for package: glibc-devel-2.11-2.i686
--> Processing Dependency: glibc-headers = 2.11-2 for package: glibc-devel-2.11-2.i686
--> Processing Dependency: libcidn.so.1 for package: glibc-devel-2.11-2.i686
<...snipped some very similar lines...>
--> Processing Dependency: libnss_dns.so.2 for package: glibc-devel-2.11-2.i686
--> Processing Dependency: libBrokenLocale.so.1 for package: glibc-devel-2.11-2.i686
--> Processing Dependency: libnss_compat.so.2 for package: glibc-devel-2.11-2.i686
--> Running transaction check
---> Package glibc.i686 0:2.11-2 set to be updated
--> Processing Dependency: glibc-common = 2.11-2 for package: glibc-2.11-2.i686
--> Processing Dependency: libfreebl3.so for package: glibc-2.11-2.i686
--> Processing Dependency: libfreebl3.so(NSSRAWHASH_3.12.3) for package: glibc-2.11-2.i686
---> Package glibc-devel.i686 0:2.11-2 set to be updated
--> Processing Dependency: glibc-headers = 2.11-2 for package: glibc-devel-2.11-2.i686
--> Running transaction check
---> Package glibc.i686 0:2.11-2 set to be updated
--> Processing Dependency: glibc-common = 2.11-2 for package: glibc-2.11-2.i686
---> Package glibc-devel.i686 0:2.11-2 set to be updated
--> Processing Dependency: glibc-headers = 2.11-2 for package: glibc-devel-2.11-2.i686
---> Package nss-softokn-freebl.i686 0:3.12.4-10.fc12 set to be updated
--> Finished Dependency Resolution
glibc-2.11-2.i686 from fedora has depsolving problems
--> Missing Dependency: glibc-common = 2.11-2 is needed by package glibc-2.11-2.i686 (fedora)
glibc-devel-2.11-2.i686 from fedora has depsolving problems
--> Missing Dependency: glibc-headers = 2.11-2 is needed by package glibc-devel-2.11-2.i686 (fedora)
Error: Missing Dependency: glibc-headers = 2.11-2 is needed by package glibc-devel-2.11-2.i686 (fedora)
Error: Missing Dependency: glibc-common = 2.11-2 is needed by package glibc-2.11-2.i686 (fedora)
You could try using --skip-broken to work around the problem
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest
or, put simply: forget about it, I can’t find the foundation package for 32-bit machines. I tried to enable the updates-testing repository with the --enablerepo=u*g flag to yum, as recommended in an encouraging thread, but it did no good.
Instead, I followed another forum thread, which recommended to brutally remove the existing i686 version, and reinstall the new one:
# rpm --nodeps -e glibc-2.11.1-1.i686
# yum install glibc-devel-2.11-2.i686
But that didn’t work, because “glibc-common = 2.11-2 is needed by package glibc-2.11-2.i686 (fedora)”.
But then it struck me that if the problem is the -2 suffix, why don’t I go for the -1 version? I mean, I got the -2 version from “yum whatprovides” on the header file, which probably answered with the latest version, for which there are dependency holes in the repository.
# yum install glibc-devel-2.11.1-1.i686
So the lesson is not to trust “yum whatprovides” blindly. At least not with exact version numbers. If the repository has broken dependencies, sometimes you just need to go back a bit.
The reason I added the i686 repository was that I thought I’d find the necessary RPMs there. I was wrong about that.
… and here’s how to do it
The idea is to create a new repository configuration file, which declares the i686 repository. These configuration files don’t point at the repository itself, but rather at a URL from which a list of mirror repositories is fetched.
# cd /etc/yum.repos.d
# cp fedora.repo fedora-i686.repo
# replace \$basearch i386 -- fedora-i686.repo
I then edited fedora-i686.repo so that the names in the [ ] headers don’t collide with those in fedora.repo, by adding an i686 string here and there.
That’s it.
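If the `replace` utility isn’t installed, sed can do the same substitution. Here’s a sketch on a scratch copy (the mirrorlist line below is a made-up example):

```shell
# A made-up sample of a repo file's mirrorlist line:
cat > /tmp/demo.repo <<'EOF'
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-12&arch=$basearch
EOF

# Substitute every occurrence of $basearch with i386, in place:
sed -i 's/\$basearch/i386/g' /tmp/demo.repo
grep arch= /tmp/demo.repo   # the line now ends with arch=i386
```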
Note that the architecture was changed to i386 (with a THREE) and not i686, because when yum fetches the mirror repository list, it asks for i386 as the architecture. If the mirror list address (as defined in the *.repo file with the mirrorlist variable) asks for i686 instead, the request fails:
$ yum repolist
Loaded plugins: presto, refresh-packagekit
fedora/metalink | 28 kB 00:00
fedora | 4.2 kB 00:00
fedora/primary_db | 12 MB 00:19
fedora686/metalink | 13 kB 00:00
Could not parse metalink https://mirrors.fedoraproject.org/metalink?repo=fedora-12&arch=i686 error was
File /var/tmp/yum-eli-X4M9Cd/x86_64/12/fedora686/metalink.xml.tmp is not XML
The file is not XML, because it contains complaints rather than data.
The problem
Sometimes software packages require setting some environment variables for their proper execution. When these variables clearly have no effect on any other application in the system, that’s fine. When they manipulate sensitive variables, which other applications may depend on, that’s a whole different story.
When it’s a single executable, the problem is fairly simple. When it’s gazillions of them, all requiring the same set of environment variables, it’s not so fun.
I solved this by writing one single wrapper for all executables, plus a lot of symbolic links. The wrapper sets the environment variables for the relevant application, and then runs the desired executable. The PATH is set so that the wrapper is run rather than the executable itself, which makes this completely transparent. In this way, the new software sees the correct environment variables, but without polluting them for the entire system.
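The pattern can be demonstrated with a tiny self-contained sketch (the paths and the MYVAR variable are made up for illustration): a “real” program, a wrapper that sets a variable and execs it, and a symlink named after the real program.

```shell
# A made-up "real" executable that reports its environment and arguments:
cat > /tmp/realbin-demo <<'EOF'
#!/bin/bash
echo "MYVAR=$MYVAR args=$*"
EOF
chmod +x /tmp/realbin-demo

# The wrapper: set the environment, then exec the real program, whose
# name is deduced from $0 (the name the wrapper was invoked by):
cat > /tmp/wrapper <<'EOF'
#!/bin/bash
export MYVAR=set-by-wrapper
exec "/tmp/realbin-${0##*/}" "$@"
EOF
chmod +x /tmp/wrapper

# A symlink named like the real program points at the wrapper:
ln -sf /tmp/wrapper /tmp/demo
/tmp/demo hello world   # prints: MYVAR=set-by-wrapper args=hello world
```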
Don’t play with my library path
I’ve just installed Xilinx ISE 9.2 on my Fedora 12 Linux machine. One of the things I was required to do was to add this snippet (more or less) to my .bashrc file:
if [ -n "$LD_LIBRARY_PATH" ]
then
LD_LIBRARY_PATH=${XILINX}/bin/${PLATFORM}:${XILINX}/X11R6/bin/lin64:/usr/X11R6/lib:${LD_LIBRARY_PATH}
else
LD_LIBRARY_PATH=${XILINX}/bin/${PLATFORM}:${XILINX}/X11R6/bin/lin64:/usr/X11R6/lib
fi
That means that from now on, every Linux application should look in Xilinx’ libraries before attempting to go for the ones Fedora supplies. But why first? Because Xilinx seems to override some standard libraries. Which is good for its own applications, but can be pretty disastrous for all others. It means, for example, that removing or upgrading ISE may cause other things in your system to break.
Why Xilinx chose this approach, I can only guess. It was most likely somewhere between “we can’t get it to work otherwise” and “you’re not using the computer for anything else, are you?”
My solution was to move these problematic lines to a wrapper script for each executable. If Xilinx wants these libraries for its own executables, so be it. Don’t pollute the whole system.
Setting up the path
Xilinx wanted me to add a lot of mumbo-jumbo to .bashrc. Most of it went into the wrapper script (shown below). The only thing I put in .bashrc was appending a directory to the path. Xilinx wanted me to put ${XILINX}/bin/${PLATFORM}, but I went for ${XILINX}/bin-wrappers/${PLATFORM}.
So this was added to .bashrc:
if [ -n "$PATH" ]
then
PATH=${XILINX}/bin-wrappers/${PLATFORM}:${PATH}
else
PATH=${XILINX}/bin-wrappers/${PLATFORM}
fi
export PATH
The wrapper
Now I created the ${XILINX}/bin-wrappers directory, with a lin64 directory underneath. In lin64, there’s an executable file named xilinx-app-wrapper, which looks like this:
#!/bin/bash
# First setup variables
PLATFORM=lin64
if [ -n "$LD_LIBRARY_PATH" ]
then
LD_LIBRARY_PATH=${XILINX}/bin/${PLATFORM}:${XILINX}/X11R6/bin/lin64:/usr/X11R6/lib:${LD_LIBRARY_PATH}
else
LD_LIBRARY_PATH=${XILINX}/bin/${PLATFORM}:${XILINX}/X11R6/bin/lin64:/usr/X11R6/lib
fi
export LD_LIBRARY_PATH
if [ -n "$NPX_PLUGIN_PATH" ]
then
NPX_PLUGIN_PATH=${XILINX}/java/${PLATFORM}/jre/plugin/i386/ns4:${NPX_PLUGIN_PATH}
else
NPX_PLUGIN_PATH=${XILINX}/java/${PLATFORM}/jre/plugin/i386/ns4
fi
export NPX_PLUGIN_PATH
if [ -n "$LMC_HOME" ]
then
LMC_HOME=${XILINX}/smartmodel/${PLATFORM}/installed_${PLATFORM}:${LMC_HOME}
else
LMC_HOME=${XILINX}/smartmodel/${PLATFORM}/installed_${PLATFORM}
fi
export LMC_HOME
# Now call the real executable. Putting the double quotes around $@
# tells bash not to break arguments with white spaces, so this is completely
# transparent.
exec ${XILINX}/bin/lin64/${0##*/} "$@"
It’s pretty simple until we reach the bottom line: the script is copied from Xilinx’ own example file, which they requested to be put in .bashrc. So up until the bottom, it’s just plain environment setting.
Now to the last line: I chose to run the Xilinx application with the bash builtin exec. This makes the Xilinx application replace the bash script, so we have one process running (and one process to kill if necessary), and the return value is passed through neatly.
This exec transparently runs the Xilinx application, which is deduced from the command used to call the wrapper. Details:
We have this ${XILINX}/bin/lin64/${0##*/} expression. The ${0##*/} part means $0 with everything up to and including the last slash chopped off. Since $0 contains the application’s name as it was called, ${0##*/} is the application name without the path. So the path is hardcoded, and the application’s name is taken from $0. Together they form the exact path to the corresponding Xilinx application.
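A quick way to convince yourself (the path here is just an example standing in for $0):

```shell
path=/opt/Xilinx/bin/lin64/xst   # a made-up value, standing in for $0
echo "${path##*/}"               # prints: xst
echo "${path%/*}"                # prints: /opt/Xilinx/bin/lin64 (the complementary expansion)
```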
So this wrapper is a single script which can wrap all Xilinx executables. All we have to do is to symlink to the wrapper with the same names as the Xilinx applications.
Finally, we have the “$@” thing. That means all arguments with which the wrapper was called. Without the double quotes, possible spaces in the arguments would break them up.
Note that this works with any array, so
a[0]="Hello there";
a[1]="One argument";
exec ./test "${a[@]}"
will send the ./test script exactly two arguments (the double quotes themselves are not passed to the application)
Symbolic links
The idea is now to create a symbolic link for each executable in the bin directory, all pointing at xilinx-app-wrapper. This makes sense, since this script detects by which command it was called, and will exec the correct Xilinx application in turn.
The only problem is that Xilinx’ bin directory also includes several library files, which shouldn’t be executable. To overcome this I wrote a small script, which I used to create the symbolic links (and removed it afterwards):
#!/bin/bash
TARGETDIR=${XILINX}/bin-wrappers/lin64
for i in * ; do
    if file "$i" | grep -iq executable ; then
        ( cd "$TARGETDIR" && ln -s xilinx-app-wrapper "$i" )
    fi
done
I ran the script from ${XILINX}/bin/lin64, and its principle is simple: the “file” utility is run on each file in that directory. If the word “executable” appears in its output, the file earns a symbolic link in the bin-wrappers directory (pointing at the script, of course, and not at the Xilinx application).
So the result of this is:
$ cd ${XILINX}/bin-wrappers/lin64
$ ls -l
[...skipped a lot of lines...]
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xilgrep -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xilhelp -> xilinx-app-wrapper*
-rwxr-xr-x. 1 root root 925 2010-01-21 00:04 xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xilinxd -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xilperl -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xinfo -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 _xinfo -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xinfoenv -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xlicmgr -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xplorer.pl -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xplorer.tcl -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xpower -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xpwr -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xreport -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 XSLTProcess -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xst -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xtclsh -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xusbdfwu -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xusb_emb -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xusb_xlp -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xusb_xpr -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 xusb_xup -> xilinx-app-wrapper*
lrwxrwxrwx. 1 root root 18 2010-01-20 23:53 zip -> xilinx-app-wrapper*
Note that among all symlinks, we have xilinx-app-wrapper itself, which is the only thing that actually runs in this directory, and hence the only thing which needs changing when a Xilinx-global change in the environment is necessary.
That’s it. At this point everything ticked like clockwork.
Barely relevant stuff
Since I didn’t reach the solution above right away, I tried a few other things. It’s a shame to throw them away just like that.
First, let’s see the “test” script mentioned above:
#!/bin/bash
while [ -n "$1" ]
do
echo "Argument: $1"
shift
done
It’s a simple script that shows which arguments it was given, by printing them one by one. If an argument was broken up because of spaces, this is how I saw it.
And now for an alternative (and much more cumbersome) way to pass arguments transparently:
args="";
while [ -n "$1" ]; do
  # Append the new argument within quotes; any existing double
  # quotes are converted to \" (all occurrences, hence the double slash)
  args+="\"${1//\"/\\\"}\""
shift
[ -n "$1" ] && args+=" ";
done
bash <<END
./test $args
END
The trick in this script is to create an empty variable $args, and append each incoming argument surrounded by double quotes, with a white space in between. If I wanted to make this simple, I would go
args+="\"$1\" "
somewhere in the middle, but hey, I don’t want a white space after the last argument. Besides, what happens if the argument itself contains a double quote? Solution: replace each double quote (") with an escaped one (\"). That’s what the terrible expression in the curly brackets stands for. It’s basically ${variable/search-pattern/replace-with} (doubling the first slash makes it replace all occurrences, not just the first), plus the fact that both double quotes and backslashes have to be escaped with a backslash. Looks a bit like Perl on a bad day.
And apart from being horrible, it has another major disadvantage: it creates another process. I couldn’t make an exec call with this method, so I feed a bash shell with the data through standard input, which isn’t very cute. (It still works even if one of the arguments happens to be END, by the way.) So if each argument needs some manipulation, and can’t be passed through with “$@”, the latter method will do the job.
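For the record: when each argument does need manipulation, building a bash array and expanding it with "${args[@]}" keeps every element intact, works with exec, and avoids the quoting acrobatics above. A small self-contained sketch (the bracket-wrapping is an arbitrary stand-in for whatever manipulation is needed):

```shell
#!/bin/bash
# Rebuild the argument list in an array, manipulating each element;
# "${args[@]}" then expands to exactly one word per element,
# spaces and quotes included.
demo() {
    local args=()
    local a
    for a in "$@"; do
        args+=("[$a]")          # any per-argument manipulation goes here
    done
    printf '%s\n' "${args[@]}"  # stand-in for: exec ./test "${args[@]}"
}
demo "Hello there" 'with "quotes"'
# prints:
#   [Hello there]
#   [with "quotes"]
```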