The SPF, DKIM and DMARC trio: Making your email appear decent


Whether you just want your non-Gmail personal email to get through, or you have a website that produces transactional emails (those sent by your site or web app), there’s a long fight with spam filters ahead.

The war against unsolicited emails will probably go on as long as email is used, and it’s an ongoing battle where one leak is sealed and another is found. Those responsible for mail delivery constantly tweak their spam detectors’ parameters to minimize complaints. There are no general rules for what is detected as spam and what isn’t. What passes Gmail’s tests may very well fail on Security Industries’ mail server and vice versa. Each has its own experience.

But you want your message to reach them all. Always.

This is a guide to the main ideas and concepts behind the trio of mechanisms mentioned in the title. The purpose is to focus on the delicate and sometimes crucial details that are often missed in howtos everywhere. And also try to understand the rationale behind each mechanism, even though it might not be relevant when spam detector X is tuned to achieve the best results, given a current flow of spam mails with a certain pattern.

Howtos usually tell you to employ a DKIM signing software on the mail server, and make SPF and DKIM DNS records for “your domain”. Which one? Not necessarily trivial, as discussed below. And then possibly add a DMARC record as well. Will it really help? Also discussed.

Here’s the thing: Employing these elements will most likely do something good, even if you get it wrong. Setting up things without understanding what you’re doing can solve an immediate problem. This post focuses on understanding the machinery, so the best possible setting can be achieved.

Get your tie knot right.

Rationale: Domains cost money

There are different ideas behind each of the trio’s mechanisms, but there’s one solid idea behind them all: The reputation of a domain name.

If you’re a spammer, you can’t send thousands of emails that are linked to a domain name without wrecking its reputation rather quickly. So let’s make sure each domain name’s owner stands behind the mails sent on its behalf, and maintains its reputation. This requires a way to tell whether this owner really sent each mail, and not just a spammer abusing it. SPF and DKIM supply these mechanisms.

The cost of domain names makes it not worthwhile to purchase domains just for the sake of a few thousand mails, until their reputation is dead meat. Well, sort of. There are .bid domains at $1.75 today. But .com and .org are still rather expensive.

DMARC takes this one step further, and allows a domain name owner to prevent the delivery of emails that weren’t sent on its behalf. It also puts the focus on the sender given in the “From:” header, instead of other domains that SPF and DKIM might relate to. This makes the junk-domain concept even less viable.

Despite all said above, I still get spam messages (of the random recipient type) with this trio perfectly set up. But they’re relatively rare.

The trio in short

These three techniques are fundamentally different in what they do. In brief for now, in more detail further below:

  • SPF: Defines the set of server IP addresses that are authorized to use a domain name to identify itself (HELO/EHLO) and/or the mail’s sender (MAIL FROM) in the SMTP exchange. Note that this doesn’t directly relate to the “From:” mail header, even though it does in many practical cases.
  • DKIM: A method to publish a public key in a DNS record for the digital signature of some parts of an email message, so this signature can be verified by any recipient. The domain name of this DNS record, which is given explicitly in the signature, doesn’t need to have any relation to the mail’s author, sender or any relaying server involved (even though it usually has). It’s just a placeholder for the accumulating reputation of mails that are signed with it.
  • DMARC: A mechanism to prevent the domain name from being abused by spammers. It basically tells the recipient that an email with a certain Author Domain (as it appears in “From:”) should pass an SPF and/or DKIM test, and what to do if not.

In essence, SPF authenticates the use of some mail relay servers, DKIM authenticates the message carrying its signature, and DMARC says what to do if the authentication(s) fail.

The DNS records

All three techniques rely on a DNS lookup for a TXT record, with the relevant domain name included in the query (let’s say our domain is example.com):

  • SPF records are found as a TXT record for the domain itself (that is, example.com).
  • DKIM records are the TXT records for the “selector._domainkey” subdomain, where “selector” is given in the mail message’s DKIM header. So it’s like default._domainkey.example.com (for selector=default).
  • DMARC records are the TXT entry for the “_dmarc” subdomain (i.e. _dmarc.example.com).
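For a hypothetical example.com, the three records could look something like this in a zone file (all names and values here are placeholders, not a recommended policy):

```text
; Hypothetical TXT records for example.com
example.com.                    IN TXT "v=spf1 ip4:192.0.2.0/24 -all"
default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0..."
_dmarc.example.com.             IN TXT "v=DMARC1; p=none;"
```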

So it’s crucial which domain it is that the spam filter software considers to be “the domain”. Spoiler: DKIM and DMARC have this sorted out nicely. It’s SPF that is tricky.

Note that given an email message, the recipient can easily check whether it has SPF and DMARC records, but (without DMARC) it can’t know if there’s a relevant DKIM record available, because of the selector part. Consequently, adding a DKIM record and signing only part of the emails won’t backfire on those that aren’t signed.

Which domain is “the domain”?

Quite often, guides in these topics just say “the domain”, making it sound as if there’s only one domain involved. In fact, there are several to be aware of.

Let’s say that john@example.com sends a mail by connecting to his ISP’s mail server, which in turn relays it to the destination mail server. We then have four different domains involved.

  1. The domain of the author, appearing in the From: header, shown to the human recipient as the sender. example.com in this case.
  2. The “envelope sender”, appearing in the MAIL FROM part of the SMTP conversation of the relay transmission. This could be john@example.com (the simple approach), but also something like bounce-8477261-john=example.com@isp.example.net. This is because the envelope sender is the bounce address, and some mail relays make up some kind of bogus bounce address so they can track the bouncing mails.
  3. The domain used in the HELO/EHLO part of the SMTP conversation of the relay transmission. Probably something like mailout3.isp.example.net, as the ISP has many servers for relaying out.
  4. The rDNS domain entry of the IP address of the sender on the relay transmission. If this entry doesn’t exist, or isn’t exactly the same as the HELO/EHLO domain, hang the postmaster. Some mail servers won’t even talk with you unless they match.

I use the term “relay transmission” for the connection between two mail servers: Going from the server that accepted your message for transmission when you pressed “Send” to the server that holds the mail account of the mail’s recipient (i.e. destination of the MX record of the recipient’s full domain).

But oops. Mails are often relayed more than once before reaching their final station. Except for the first item in the list above, the domains are different on each such transmission. Which one counts? When does it count?

Luckily, this dilemma is pretty much limited to SPF. And with DMARC, it’s nonexistent.

SPF

At times, people just add an SPF record for their mail address’ domain with their relay servers’ IP range, and think they’ve covered themselves SPF-wise. Sometimes they did, and sometimes they didn’t. No escape from the gory details.

If you’re not familiar with the HELO/EHLO and MAIL FROM: SMTP tokens, I warmly suggest taking a quick look at another post of mine. It’s nearly impossible to understand SPF without them.

The SPF mechanism is quite simple: The server that receives the email looks up the TXT DNS record(s) for the domain name given in the envelope sender, that is in “MAIL FROM:”. If an SPF record exists, it checks if the IP address of the sender is in the allowed set, and if so, the SPF test is passed.

The domain name that is checked is the “domain portion” of the “MAIL FROM” identity (see RFC7208 section 4.1), or in other words, everything after the “@” character of the MAIL FROM. Or so it’s commonly understood: The RFC doesn’t define this term.

The receiver is likely to perform the same check on the HELO/EHLO identification of the sender. In fact, RFC7208 section 2.3 recommends performing it even before the MAIL FROM check. The SPF test will pass if either the HELO/EHLO or the MAIL FROM check passes (the RFC doesn’t say this explicitly, but it’s clear from the argument for beginning with the more definite HELO/EHLO check).
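As a sketch of this logic (not of any real filter’s code), here’s roughly what the receiver-side decision looks like. The domains and IP ranges are made up, and the DNS lookup is replaced by a dictionary:

```python
# Simplified receiver-side SPF decision: pass if either the HELO/EHLO
# identity or the MAIL FROM identity passes. Not a full RFC 7208
# implementation (no include:, redirect=, softfail etc.).
from ipaddress import ip_address, ip_network

# Stand-in for the DNS lookup: domain -> networks allowed by its SPF record.
SPF_RECORDS = {
    "relay.example.net": [ip_network("192.0.2.0/24")],
    "example.com": [ip_network("198.51.100.17/32")],
}

def spf_check_one(domain, sender_ip):
    """'pass' if sender_ip is in the domain's allowed set, 'fail' if not,
    'none' if the domain publishes no SPF record."""
    nets = SPF_RECORDS.get(domain)
    if nets is None:
        return "none"
    ip = ip_address(sender_ip)
    return "pass" if any(ip in net for net in nets) else "fail"

def spf_test(helo_domain, mail_from, sender_ip):
    # HELO/EHLO check first, as RFC 7208 section 2.3 recommends.
    if spf_check_one(helo_domain, sender_ip) == "pass":
        return "pass"
    # Otherwise fall back to the domain portion of MAIL FROM.
    mail_from_domain = mail_from.rpartition("@")[2]
    return spf_check_one(mail_from_domain, sender_ip)
```

For example, spf_test("relay.example.net", "john@example.com", "192.0.2.5") passes on the HELO identity alone, regardless of the MAIL FROM domain.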

This is important: Any mail server can ensure all mails that go through it pass the (non-DMARC) SPF test, just by having a DNS record on its full HELO/EHLO domain name. It’s silly not to have one. So if you’re setting up a mail server called mail.example.com, be sure to add an SPF record for mail.example.com, allowing the IP of that server. This SPF test won’t count for DMARC purposes, but the “Received-SPF: pass” line among the mail headers surely doesn’t hurt.

Except when DMARC is applied in one of its enforcing modes, there is no clear rule on what to do if this test fails or passes with one of the SMTP tokens or both. This is raw material for the spam detection software.

It’s however important to note that it’s perfectly normal that the envelope address is made up completely by the mail relay, because it functions as a bounce address. So an email sent from john@example.com may have the same envelope address, but it’s also perfectly normal that the MAIL FROM: would be something like bounce-8477261-john=example.com@isp.example.net. This allows the ISP to detect massive bouncing of emails, and possibly do something about it. In this case, the relaying server’s domain can be used to pass the (non-DMARC) SPF test instead.

Well, with the reputation-per-domain rationale, it actually does make sense. But with DMARC, this won’t cut it. The SPF record must belong to the “From:” sender. See below.

Now, the formal rules are nice, but if you just wrote a spam filter, would you check for the SPF record of the “From:” sender’s domain, even though it’s not really relevant according to the RFC? Of course you would. If the domain owner of the Author Address has given permission for a server to relay emails on its behalf, it’s a much stronger indication. So it’s probably a good idea to make such a record, even if it makes no sense directly. And it makes you better prepared for DMARC.

As a matter of fact, it’s recommended to add SPF records for any domain and subdomain that may somehow appear in the mail, to the extent possible, of course. A DNS record is cheap, and you never know if a spam detector expects it to be there, whether it should or not.

Bottom line: We don’t really know how many points spam filter X gives an SPF record of this type or another. It depends on the history of previous spam. So try to cover all options, even those that aren’t required per RFC.

Information on setting up an SPF record is all around the web. I suggest starting with Wikipedia’s great entry and if you want to be accurate about it, in RFC7208.

DKIM

This is easiest explained through a real signature, taken from the header of a real mail message:

Dkim-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com;
 s=20161025; h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
 b= [ ... ]

To verify this signature, a lookup for the DNS TXT entry for 20161025._domainkey.gmail.com is made. Note that except for the _domainkey part, this domain is composed of the s= (selector) and the d= (domain) assignments in the signature. The answer should contain an RSA public key for verifying that the hash of some selected headers (selected by h=) is indeed signed by the blob in the b= assignment. That’s it. If the signature is OK, the DKIM test is passed.
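Deriving the lookup name from the signature’s tags is mechanical. A minimal sketch (the tag parser is deliberately simplistic, and the header value below is abbreviated):

```python
# Build the DNS name queried for DKIM verification from the s= (selector)
# and d= (domain) tags of a DKIM-Signature header value.
def parse_dkim_tags(header_value):
    """Naive tag=value parser; real DKIM parsing handles folding and
    quoting more carefully."""
    tags = {}
    for part in header_value.split(";"):
        part = part.strip()
        if "=" in part:
            k, _, v = part.partition("=")
            tags[k.strip()] = v.strip()
    return tags

def dkim_record_name(selector, domain):
    return f"{selector}._domainkey.{domain}"
```

With the (abbreviated) signature above, dkim_record_name("20161025", "gmail.com") yields the name to look up.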

Note that the DKIM signature directly covers only the headers listed in h=, but the body is covered as well, through its hash in the bh= tag (not shown in the truncated example above). So signing a message with DKIM does vouch for its content, not just for the fact that you sent that mail.

Also note that no other domains related to the email make any difference for passing the DKIM test itself. Not the sender’s, not the mail relays’, nothing. Passing the DKIM test just means that the signing domain (gmail.com in this case) has signed this message and therefore puts its own reputation on it. It doesn’t say anything about who sent the message.

The common practice is however that the signing domain is the From: header domain. Probably because DMARC can’t be applied otherwise, and maybe also because the goal is to impress spam filters. Passing the DKIM test is nice formally, but if the spam filter finds it fishy, it can backfire.

As for SpamAssassin, it doesn’t care much about DKIM so far. As of now, passing the DKIM test doesn’t change the score. Or more precisely, the existence of a DKIM signature increases the spam score (more spammy) by 0.1, but if the signature is correct, the score is reduced by 0.1. So we’re back to zero. If the signature belongs to the author (matching From: domain), the score is reduced (i.e. towards non-spam) by 0.1. All in all, a DKIM signature wins a score of 0.1 on SpamAssassin. That may not seem worth the effort, but SpamAssassin is not the only filter in the world. And it may change over time.

Another reason, from RFC4871, section 6.3: “If the message is signed on behalf of any address other than that in the From: header field, the mail system SHOULD take pains to ensure that the actual signing identity is clear to the reader.” Yeah right. I’ve seen Gmail verifying a DKIM signature of a domain which had nothing to do with anything in that message, surely not the sender. It just went “dkim=pass”.

Finally, a question: The MUA (e.g. Thunderbird) is allowed to add a DKIM signature, which would actually make sense: It allows a human end user to sign the emails directly, with no need for anything special on the relaying infrastructure. And there’s no problem with multiple RSA key pairs for multiple users of the domain, since the “s=” selector allows a virtually unlimited number of DKIM DNS records. Why there isn’t a plugin for at least Thunderbird is unclear to me. Maybe the answer lies in SpamAssassin’s indifference to it.

DMARC

Suppose that you own the company Example Ltd. with domain example.com, and you’ve decided that all mails from that domain (as in the From: header) will be DKIM signed. Now some spam mail arrives from someone else, without a DKIM signature, and fails the SPF test. But the recipient has no way to tell that it should have passed such tests.

DMARC is the mechanism that tells the recipient what to expect, and what to do if the expectation isn’t met. This allows the owner of the domain to ensure only mails arriving from its own machines are accepted. Spam pretending to come from its domain is dropped.

This is what Gmail did to force emails from all its users (i.e. having a gmail.com address) to be relayed through their servers only. The TXT record for _dmarc.gmail.com goes:

"v=DMARC1; p=none; sp=quarantine;"

In other words: hold a message from a subdomain of gmail.com that isn’t proven to come from Gmail’s servers (that’s the sp=quarantine part; the p=none part leaves gmail.com itself unenforced). Most servers just junk quarantined messages.
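The way a receiver reads such a record can be sketched like this (a simplified tag parser; real implementations also handle pct=, ruf= and so on):

```python
# Parse a DMARC TXT record and pick the applicable policy: p= applies to
# mail whose From: domain is the organizational domain itself, sp= to its
# subdomains (sp= defaults to p= when absent, per RFC 7489).
def parse_dmarc(txt):
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            k, _, v = part.partition("=")
            tags[k.strip()] = v.strip()
    return tags

def policy_for(record, from_domain, org_domain):
    tags = parse_dmarc(record)
    if from_domain == org_domain:
        return tags.get("p", "none")
    return tags.get("sp", tags.get("p", "none"))
```

With Gmail’s record above, the policy is “none” for gmail.com itself but “quarantine” for its subdomains.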

And now to how the test is done. Spoiler II: DMARC isn’t interested in an SPF or DKIM pass if the tested domain isn’t tightly linked with the “From:” header’s domain. Or as they call it: aligned with the RFC5322.From field. This is a huge difference.

Let’s take it directly from RFC7489, section 4.2:

A message satisfies the DMARC checks if at least one of the supported authentication mechanisms:

  1. produces a “pass” result, and
  2. produces that result based on an identifier that is in alignment, as defined in Section 3.

The “supported authentication mechanisms” for DMARC version 1 are SPF and DKIM, as listed in section 4.1 of the same RFC.

The first thing we learn is that it’s enough to pass one of SPF or DKIM. No need to have both for passing DMARC.

Second, the term “is in alignment” above. It’s defined in the RFC itself, and essentially means that the domain for which the SPF or DKIM passed is the same as the one in the From: header, possibly give or take subdomains. The only reason they didn’t just say that the domains must be equal is because of the possibility of “relaxed mode”, allowing an email from john@mail.example.com to be approved by tests that passed with the example.com domain. This is what “being in alignment” means in relaxed mode. In “strict mode” alignment occurs only when they’re perfectly equal.
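The alignment test can be sketched as follows. Note that a real implementation derives the organizational domain from the Public Suffix List; the two-label shortcut here is a deliberate simplification that breaks on domains like example.co.uk:

```python
# Relaxed vs. strict alignment, loosely after RFC 7489 section 3.1.
def org_domain(domain):
    """Crude organizational domain: the last two labels. Real code
    consults the Public Suffix List instead."""
    return ".".join(domain.lower().split(".")[-2:])

def aligned(auth_domain, from_domain, mode="relaxed"):
    """auth_domain is the domain for which SPF or DKIM passed;
    from_domain is the From: header's domain."""
    if mode == "strict":
        return auth_domain.lower() == from_domain.lower()
    return org_domain(auth_domain) == org_domain(from_domain)
```

So aligned("mail.example.com", "example.com") holds in relaxed mode but not in strict mode.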

If the email passes the DMARC test, there isn’t much to fuss about. If it fails, what to do depends on the policy, as given for the relevant domain. Which domain? According to RFC7489 section 3: “Author Domain: The domain name of the apparent author, as extracted from the RFC5322.From field”. And then in section 4.3, item 7: “The DMARC module attempts to retrieve a policy from the DNS for that domain” (referring to the Author Domain).

So it’s a DNS query for the TXT record of the From: domain, with the “_dmarc” subdomain prepended. As in the example above for gmail.com.

Finally, a tricky point. If a mail server, for which the SPF test is made, didn’t use the Author Domain in its MAIL FROM nor in the HELO/EHLO, the SPF test is worthless for DMARC purposes. It’s however quite tempting to check the Author Domain for its SPF record nevertheless. I mean, if the Author Domain allows the IP address of the mail relay server, isn’t it good enough to pass a DMARC test? Doing this goes against the SPF’s RFC, and isn’t mentioned in any way in DMARC’s RFC. But it makes a lot of sense. I won’t be surprised if it’s common practice already.

Will DMARC make my email delivery better?

TL;DR: Surprisingly enough, yes.

The irony about DMARC is that it bites on the spam messages, and does very little on the legit ones. After all, if an email passed both the SPF and DKIM tests on the Author Domain, what is there left to say?

And if the same email passed only one of the tests, why would a DMARC record add reassurance?

Of course, if you want to fake mails pretending to be you, definitely apply DMARC.

But once again, noone knows how spam filter X behaves. Maybe someone found out that DMARC signed domains carry less spam, and tuned the filter in favor of them. And maybe the rejection of spam mails thanks to the DMARC record helped with the domain’s spam statistics. Even though I would expect any machine that maintains statistics to count the emails that pass SPF / DKIM tests separately.

And here comes the big surprise. Gmail refused to accept messages from my server until I added a DMARC record. Once I did it, I was all welcome. It makes no sense, but somehow, Google seems to like the very existence of a DMARC record. Maybe a coincidence, most likely not. So do yourself a favor, and add a TXT record to the _dmarc subdomain of your domain:

v=DMARC1; p=none; sp=none; rua=mailto:postmaster@example.com

This record tells the recipient to do nothing with a mail message that fails the DMARC test, so it’s harmless. But aggregate reports will be mailed to the address given in the rua= tag (a placeholder address here). Which can be useful in itself.

Conclusion

There might be official rules for entering a club, but at the end of the day, you can’t know what the doorkeeper looks at. So try to get everything as tidy as possible, and hope you won’t be mistaken for the bad guys.

And don’t wait for the first time you won’t be let in. It might be too late to fix it then.

SMTP tidbits for the to-be postmaster

This is a quick overview of the parts of an SMTP session that are relevant to SPF and mail server setup.

Just a sample SMTP session

For starters, this is what an ESMTP session between two mail servers talking on port 25 can look like (shamelessly copied from this post, which also shows how I obtained it).

"" <>... Connecting to [] via relay...
220 ESMTP Sendmail 8.14.4/8.14.4; Sat, 18 Jun 2016 11:05:26 +0300
>>> EHLO Hello localhost.localdomain [], pleased to meet you
250 HELP
>>> MAIL From:<> SIZE=864
250 2.1.0 <>... Sender ok
>>> RCPT To:<>
>>> DATA
250 2.1.5 <>... Recipient ok
354 Enter mail, end with "." on a line by itself
>>> .
250 2.0.0 u5I85QQq030607 Message accepted for delivery
"" <>... Sent (u5I85QQq030607 Message accepted for delivery)
Closing connection to []
>>> QUIT
221 2.0.0 closing connection


HELO / EHLO

This is the first thing the client says after the server’s 220 greeting. More precisely, it says something like

HELO mail.example.com

This self-introduction is important: The server knows your IP, and probably makes a quick rDNS check on it, to see if you’re making this domain up. So the domain given in HELO must be the same as in the rDNS record. Exactly.

It doesn’t matter if this domain has nothing to do with the domain of the actual From-sender. Or any other domain, for that matter. Relaying emails is normal. Not having the rDNS set up properly shouldn’t be.

Rumor has it that most mail servers will accept the message even if there’s no match, or even if there’s no rDNS record at all. And I’ve seen plenty of these myself. I’ve also had my server rejected because of this. You lose points for being lazy.

EHLO is like HELO, but indicates the start of an ESMTP session. For the purpose of the domain, it’s the same thing.


MAIL FROM

After the HELO introduction (and possibly some other stuff), the client goes something like:

MAIL FROM:<john@example.com>

The email address given is often referred to as the envelope sender, envelope-from or smtp.mailfrom.

In its simplest form (and as originally intended), this is the sender of the mail, copied from the “From:” header, as presented to the end user. But even more important, this is the address for bouncing the mail if it’s undeliverable. So one common trick, mostly used by mass relays, is to assign a long and tangled MAIL FROM: bounce address from which the relaying server can identify the message better.
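As an illustration of such a tangled bounce address, here’s a VERP-style scheme. The format is entirely made up; each relay has its own:

```python
# Generate a VERP-style bounce address encoding the message ID and the
# original recipient, so a bounce arriving at this address immediately
# identifies which message (and recipient) failed.
def verp_bounce_address(message_id, recipient, bounce_domain):
    local = recipient.replace("@", "=")  # common VERP convention
    return f"bounce-{message_id}-{local}@{bounce_domain}"
```

A message to john@example.com might then carry something like MAIL FROM:<bounce-0042-john=example.com@relay.example.net>.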

The envelope sender appears as the “Return-Path:” header in mail messages as they reach mailboxes. Along the Received: list in the mail headers, “envelope-from” tags often appear, indicating the envelope sender of the relevant leg.

One way or another, if you’re into SPF, then the SPF record must match the envelope sender, and not necessarily the From: sender. Even though it’s a good idea to cover both. Mail relays are a bit messy on what they check.


VRFY and EXPN

VRFY allows the client to check whether an email address is valid on the server. If it is, the server responds with the full address of the user.

This allows the client to scan through a range of addresses, and find one that is a valid recipient. Excellent for spammers, which is why this function is commonly unavailable today. For example:

252 Administrative prohibition

on another machine:

252 2.5.2 Cannot VRFY user; try RCPT to attempt delivery (or try finger)

EXPN is more or less the same, just with mailing lists: The client gives the name of the list, and gets the list of its members. The common practice is not to allow this command, even by servers that allow VRFY despite its spam issues.

If you’re setting up a mail server, disable both. They are often enabled by default.

Perl, DBI and MySQL wrongly read zeros from database

TL;DR: SELECT queries in Perl for numerical columns suddenly turned to zeros after a software upgrade.

This is a really peculiar problem I had after my web hosting provider upgraded some database related software on the server: Numbers that were read with SELECT queries from the database were suddenly all zeros.

Spoiler: It’s about running Perl in Taint Mode.

The setting was DBD::mysql version 4.050, DBI version 1.642, Perl v5.10.1, and MySQL Community Server version 5.7.25 on a Linux machine.

For example, the following script is supposed to write the number of lines in the “session” table:

#!/usr/bin/perl -T -w
use warnings;
use strict;
require DBI;

my $dbh = DBI->connect( "DBI:mysql:mydb:localhost", "mydb", "password",
		     { RaiseError => 1, AutoCommit => 1, PrintError => 0,
		       Taint => 1});

my $sth = $dbh->prepare("SELECT COUNT(*) FROM session");

$sth->execute;

my @l = $sth->fetchrow_array;
my $s = $l[0];
print "$s\n";


But instead, it prints zero, even though there are rows in the said table. Turning off taint mode by removing the “-T” flag in the shebang line gives the correct output. Needless to say, accessing the database with the “mysql” command-line client gave the correct output as well.

This is true for any numeric readout through this MySQL interface. It’s particularly problematic when an integer is used as a user ID on a web site, and fetched with

my $sth = db::prepare_cached("SELECT id FROM users WHERE username=? AND passwd=?");
$sth->execute($name, $password);
my ($uid) = $sth->fetchrow_array;

If the credentials are wrong, $uid will be undef, as usual. But when any valid user gives correct credentials, they’re allocated user number 0. Which I was cautious enough not to allocate as the site’s supervisor, but that’s actually a common choice (what’s the UID of root on a Linux system?).

A softer workaround, instead of dropping the “-T” flag, is to set the TaintIn flag in the DBI->connect() call instead of Taint (i.e. TaintIn => 1 in place of Taint => 1). Taint stands for TaintIn plus TaintOut, so this effectively disables TaintOut, i.e. the tainting of data arriving from the database. And disabling the tainting of this data also skips the zero-value bug. This leaves all other tainting checks in place, in particular those on data supplied from the network. So not enforcing the sanitizing of data from the database is a small sacrifice (in particular if the script already has mileage running with the enforcement on).

And in the end I wonder if I’m the only one who uses Perl’s tainting mechanism. I mean, if there are still (2019) advisories on SQL injections (mostly PHP scripts), maybe people just don’t care much about things of this sort.

Traces of a (failed, I hope) web server attack

I suddenly got the following line in public_html/error_log:

[06-Feb-2019 17:51:53] PHP Deprecated:  Automatically populating $HTTP_RAW_POST_DATA is deprecated and will be removed in a future version. To avoid this warning set 'always_populate_raw_post_data' to '-1' in php.ini and use the php://input stream instead. in Unknown on line 0

So I took a closer look on the logs: - - [06/Feb/2019:17:51:50 -0500] "POST /%25%7b(%23dm%3d%40ognl.OgnlContext%40DEFAULT_MEMBER_ACCESS).(%23_memberAccess%3f(%23_memberAccess%3d%23dm)%3a((%23container%3d%23context%5b%27com.opensymphony.xwork2.ActionContext.container%27%5d).(%23ognlUtil%3d%23container.getInstance(%40com.opensymphony.xwork2.ognl.OgnlUtil%40class)).(%23ognlUtil.getExcludedPackageNames().clear()).(%23ognlUtil.getExcludedClasses().clear()).(%23context.setMemberAccess(%23dm)))).(%23res%3d%40org.apache.struts2.ServletActionContext%40getResponse()).(%23res.addHeader(%27eresult%27%2c%27struts2_security_check%27))%7d/ HTTP/1.1" 500 2432 "-" "Auto Spider 1.0" - - [06/Feb/2019:17:51:51 -0500] "POST / HTTP/1.1" 200 4127 "-" "Auto Spider 1.0" - - [06/Feb/2019:17:51:52 -0500] "POST / HTTP/1.1" 200 4127 "-" "Auto Spider 1.0" - - [06/Feb/2019:17:51:53 -0500] "POST / HTTP/1.1" 200 4131 "-" "Auto Spider 1.0" - - [06/Feb/2019:17:52:14 -0500] "POST / HTTP/1.1" 200 4129 "-" "Auto Spider 1.0" - - [06/Feb/2019:17:52:15 -0500] "POST / HTTP/1.1" 200 4130 "-" "Auto Spider 1.0" - - [06/Feb/2019:17:52:18 -0500] "POST / HTTP/1.1" 200 4130 "-" "Auto Spider 1.0

Googling around for the first entry, which is obviously some kind of attack (partly because it’s a POST coming from nowhere), it looks like an attempt to exploit the Struts Remote Code Execution Vulnerability based upon this proof of concept for CVE-2017-9791.

The unpleasant thing to note is that the error message doesn’t relate to the first POST request, but to a later one. So maybe this attack went somewhere? Anyhow, it’s not my server, so I can’t do much about Apache’s configuration. Besides, other information I have seems to indicate that the attack didn’t manage to do anything.

Guess it’s just one of a gazillion attacks that go unnoticed, just this one created a line in my error log.

Replacing ntpd with systemd-timesyncd (Mint 18.1)


It all began when I noted that my media center Linux machine (Linux Mint 18.1, Serena) finished a TV recording a bit earlier than expected. Logging in and typing “date” I was quite surprised to find out that the time was off by half a minute.

The first question that comes to mind is why the time synchronization didn’t work. The second is, if it didn’t work, how come I hadn’t noticed this issue earlier? The computer has been in use as a media center for a little less than two years.

What happened

It turns out (and it wasn’t easy to tell) that the relevant daemon was ntpd.

So what’s up, ntp?

$ systemctl status ntp
● ntp.service - LSB: Start NTP daemon
   Loaded: loaded (/etc/init.d/ntp; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2018-12-19 12:38:06 IST; 1 months 7 days ag
     Docs: man:systemd-sysv-generator(8)
  Process: 1257 ExecStop=/etc/init.d/ntp stop (code=exited, status=0/SUCCESS)
  Process: 1385 ExecStart=/etc/init.d/ntp start (code=exited, status=0/SUCCESS)

Dec 19 12:38:06 tv systemd[1]: Starting LSB: Start NTP daemon...
Dec 19 12:38:06 tv ntp[1385]:  * Starting NTP server ntpd
Dec 19 12:38:06 tv ntp[1385]:    ...done.
Dec 19 12:38:06 tv systemd[1]: Started LSB: Start NTP daemon.
Dec 19 12:38:06 tv ntpd[1398]: proto: precision = 0.187 usec (-22)
Dec 19 12:38:08 tv systemd[1]: Started LSB: Start NTP daemon.

Looks fairly OK. Maybe the logs can tell something?

$ journalctl -u ntp
Dec 19 12:38:02 tv systemd[1]: Stopped LSB: Start NTP daemon.
Dec 19 12:38:02 tv systemd[1]: Starting LSB: Start NTP daemon...
Dec 19 12:38:02 tv ntp[1055]:  * Starting NTP server ntpd
Dec 19 12:38:02 tv ntpd[1074]: ntpd 4.2.8p4@1.3265-o Wed Oct  5 12:34:45 UTC 2016 (1): Starting
Dec 19 12:38:02 tv ntpd[1076]: proto: precision = 0.175 usec (-22)
Dec 19 12:38:02 tv ntp[1055]:    ...done.
Dec 19 12:38:02 tv systemd[1]: Started LSB: Start NTP daemon.
Dec 19 12:38:02 tv ntpd[1076]: Listen and drop on 0 v6wildcard [::]:123
Dec 19 12:38:02 tv ntpd[1076]: Listen and drop on 1 v4wildcard
Dec 19 12:38:02 tv ntpd[1076]: Listen normally on 2 lo
Dec 19 12:38:02 tv ntpd[1076]: Listen normally on 3 lo [::1]:123
Dec 19 12:38:02 tv ntpd[1076]: Listening on routing socket on fd #20 for interface updates
Dec 19 12:38:03 tv ntpd[1076]: error resolving pool Temporary failure in name resolution (-3)
Dec 19 12:38:04 tv ntpd[1076]: error resolving pool Temporary failure in name resolution (-3)
Dec 19 12:38:05 tv ntpd[1076]: error resolving pool Temporary failure in name resolution (-3)
Dec 19 12:38:06 tv systemd[1]: Stopping LSB: Start NTP daemon...
Dec 19 12:38:06 tv ntp[1257]:  * Stopping NTP server ntpd
Dec 19 12:38:06 tv ntp[1257]:    ...done.
Dec 19 12:38:06 tv systemd[1]: Stopped LSB: Start NTP daemon.
Dec 19 12:38:06 tv systemd[1]: Stopped LSB: Start NTP daemon.
Dec 19 12:38:06 tv systemd[1]: Starting LSB: Start NTP daemon...
Dec 19 12:38:06 tv ntp[1385]:  * Starting NTP server ntpd
Dec 19 12:38:06 tv ntp[1385]:    ...done.
Dec 19 12:38:06 tv systemd[1]: Started LSB: Start NTP daemon.
Dec 19 12:38:06 tv ntpd[1398]: proto: precision = 0.187 usec (-22)
Dec 19 12:38:08 tv systemd[1]: Started LSB: Start NTP daemon.

Hmmm… There is some kind of trouble there, but it was surely resolved. Or? In fact, there was no ntpd process running, so maybe it just died?

Let’s try to restart the daemon, and see what happens. As root,

# systemctl restart ntp

after which the log went

Jan 26 20:36:46 tv systemd[1]: Stopping LSB: Start NTP daemon...
Jan 26 20:36:46 tv ntp[32297]:  * Stopping NTP server ntpd
Jan 26 20:36:46 tv ntp[32297]: start-stop-daemon: warning: failed to kill 1398: No such process
Jan 26 20:36:46 tv ntp[32297]:    ...done.
Jan 26 20:36:46 tv systemd[1]: Stopped LSB: Start NTP daemon.
Jan 26 20:36:46 tv systemd[1]: Starting LSB: Start NTP daemon...
Jan 26 20:36:46 tv ntp[32309]:  * Starting NTP server ntpd
Jan 26 20:36:46 tv ntp[32309]:    ...done.
Jan 26 20:36:46 tv systemd[1]: Started LSB: Start NTP daemon.
Jan 26 20:36:46 tv ntpd[32324]: proto: precision = 0.187 usec (-22)
Jan 26 20:36:46 tv ntpd[32324]: Listen and drop on 0 v6wildcard [::]:123
Jan 26 20:36:46 tv ntpd[32324]: Listen and drop on 1 v4wildcard
Jan 26 20:36:46 tv ntpd[32324]: Listen normally on 2 lo
Jan 26 20:36:46 tv ntpd[32324]: Listen normally on 3 enp3s0
Jan 26 20:36:46 tv ntpd[32324]: Listen normally on 4 lo [::1]:123
Jan 26 20:36:46 tv ntpd[32324]: Listen normally on 5 enp3s0 [fe80::f757:9ceb:2243:3e16%2]:123
Jan 26 20:36:46 tv ntpd[32324]: Listening on routing socket on fd #22 for interface updates
Jan 26 20:36:47 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:48 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:49 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:50 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:30 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:30 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:31 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:31 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:49 tv ntpd[32324]: Soliciting pool server
Jan 26 20:36:49 tv ntpd[32324]: Soliciting pool server

Aha! So this is what a kickoff of ntpd should look like! Clearly ntpd didn’t recover all that well from the lack of internet connection (I suppose) during the media center’s bootup. Maybe it died, and was never restarted. The irony is that systemd has a wonderful mechanism for restarting failing daemons, but ntpd is still under the backward-compatible LSB interface. So the system silently remained with no time synchronization.
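For what it's worth, had ntp been a native systemd unit, a two-line drop-in would have bought exactly that self-healing behavior. A sketch, written to a scratch directory here for illustration; on a real system the file belongs in /etc/systemd/system/ntp.service.d/, followed by a systemctl daemon-reload:

```shell
# Drop-in sketch: make systemd restart the service when it dies.
# Written under /tmp here; the real path is /etc/systemd/system/ntp.service.d/
mkdir -p /tmp/ntp.service.d
cat > /tmp/ntp.service.d/restart.conf <<'EOF'
[Service]
Restart=on-failure
RestartSec=30
EOF
cat /tmp/ntp.service.d/restart.conf
```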

Go the systemd way

systemd supplies its own lightweight time synchronization mechanism, systemd-timesyncd. It makes much more sense, as it doesn’t open NTP ports as a server (like ntpd does, one may wonder what for), but just synchronizes the computer it runs on to the remote NTP server. And judging from my previous experience with systemd, in the event of multiple solutions, go for the one systemd offers. In fact, it’s sort-of enabled by default:

$ systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
   Active: inactive (dead)
Condition: start condition failed at Wed 2018-12-19 12:38:01 IST; 1 months 7 days ago
           ConditionFileIsExecutable=!/usr/sbin/VBoxService was not met
     Docs: man:systemd-timesyncd.service(8)

Start condition failed? What’s this? Let’s look at the drop-in file:

$ cat /lib/systemd/system/systemd-timesyncd.service.d/disable-with-time-daemon.conf
# don't run timesyncd if we have another NTP daemon installed

Oh please, you can’t be serious. Disabling the execution because of the existence of a file? If another NTP daemon is installed, does it mean it’s being enabled? In particular, if VBoxService is installed, does it mean we’re running as guests on a virtual machine? Like, seriously, someone might just install the Virtual Box client tools for no reason at all, and poof, there goes the time synchronization without any warning (note that this wasn’t the problem I had).

Moving to systemd-timesyncd

As mentioned earlier, systemd-timesyncd is enabled by default, but one may insist:

# systemctl enable systemd-timesyncd.service

(No response, because it’s enabled anyhow)

However in order to make it work, remove the condition that prevents it from running:

# rm /lib/systemd/system/systemd-timesyncd.service.d/disable-with-time-daemon.conf
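A side note on that rm (my own variant, not from the package): deleting a file under /lib means the next package upgrade may quietly put it back. A drop-in under /etc can reset the condition list instead, since assigning an empty value to a condition setting clears all previously accumulated conditions. Sketched against a scratch directory here; the real path would be /etc/systemd/system/systemd-timesyncd.service.d/override.conf, followed by systemctl daemon-reload:

```shell
# Drop-in sketch: neutralize the conditions without deleting the /lib file.
# Written under /tmp here for illustration only.
mkdir -p /tmp/systemd-timesyncd.service.d
cat > /tmp/systemd-timesyncd.service.d/override.conf <<'EOF'
[Unit]
# An empty assignment resets all previously set conditions,
# including the ConditionFileIsExecutable=!... lines from /lib.
ConditionFileIsExecutable=
EOF
cat /tmp/systemd-timesyncd.service.d/override.conf
```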

and then disable and stop ntpd:

# systemctl disable ntp
# systemctl stop ntp

and start the new daemon right away (on the next boot it starts by itself):

# systemctl start systemd-timesyncd

On my computer, the other two time synchronizing tools (openntpd and chrony) aren’t installed, so there’s nothing to worry about on that front.

And then we have timedatectl

Not directly related, and still worth mentioning

$ timedatectl
      Local time: Sat 2019-01-26 21:22:57 IST
  Universal time: Sat 2019-01-26 19:22:57 UTC
        RTC time: Sat 2019-01-26 19:22:57
       Time zone: Asia/Jerusalem (IST, +0200)
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

Systemd is here to take control of everything, obviously.

Cinelerra 2019 notes

Cinelerra is alive and kicking. I’ve recently downloaded the “goodguy” revision of Cinelerra, Nov 29 2018 build (called “cin” for some reason), which is significantly smoother than the tool I’m used to.


  • There are now two ways to set the effects’ attributes: with the good-old magnifying glass (“Controls”) or with the gear (“Presets”), which gives a textual interface.
  • Unlike what I’m used to, the Controls only set the global parameters, even with “Genererate keyframes while tweeking” on (spelling as used in Cinelerra).
  • In order to create keyframes, enable “Generate keyframes” and go for the “gear” tool. That isn’t much fun, because the settings are manual.
  • If the Controls are used, all keyframes get the value.
  • When rendering, there are presets. For a decent MP4 output, go for FFMPEG / mp4 fileformat, then the h264.mp4 preset for audio (Bitrate = 0, Quality = -1, Samples fltp) and same for video (Bitrate = 0, Quality = -1, Pixels = yuv420p and video options keyint_min=25 and x264opts keyint=25).

Solved: Missing ktorrent icon on Linux Mint / Cinnamon

Running ktorrent on Linux Mint 19 (Tara), the famous downwards-arrow icon was invisible on the system tray. Which made it appear like the program had quit when it was actually minimized. Clicking the empty box made ktorrent re-appear.

Solution: Invoke the Qt5 configuration tool

$ qt5ct

and under the Appearance tab set “Style” to gtk2 (I believe it was “Fusion” before). It’s not just prettier generally, but after restarting ktorrent, the icon is there.

Actually, it’s probably not about the style, but the fact that qt5ct was run. Because before making the change, ktorrent printed out the following when launched from the command line:

Mon Dec 24 09:52:55 2018: Qt Warning: QSystemTrayIcon::setVisible: No Icon set
Warning: QSystemTrayIcon::setVisible: No Icon set
Mon Dec 24 09:52:55 2018: Starting minimized
Mon Dec 24 09:52:55 2018: Started update timer
Mon Dec 24 09:52:55 2018: Qt Warning: inotify_add_watch("/home/eli/.config/qt5ct") failed: "No such file or directory"
Warning: inotify_add_watch("/home/eli/.config/qt5ct") failed: "No such file or directory"

The “No Icon set” warning is misleading, because it continued to appear. This is after the fix, with the icon properly in place in the tray:

Mon Dec 24 10:16:17 2018: Qt Warning: QSystemTrayIcon::setVisible: No Icon set
Warning: QSystemTrayIcon::setVisible: No Icon set

Anyhow, problem fixed. For me, that is.

And why ktorrent? Because its last reported vulnerability was in 2009, compared with “Transmission” which had a nasty issue in January 2018. Actually, the exploit in Transmission is interesting by itself, with a clear lesson: If you set up a webserver on the local host for any purpose, assume anyone can access it. Setting it to respond to localhost only doesn’t help.


Writing a panel applet for Cinnamon: The basics


What I wanted: A simple applet on Cinnamon, which allows me to turn a service on and off (hostapd, a Wifi hotspot). I first went for the Argos catch-all extension, and learned that Cinnamon isn’t gnome-shell, and in particular that extensions for gnome-shell don’t (necessarily?) work with Cinnamon.

Speaking of which, my system is Linux Mint 19 on an x86_64, with

$ cinnamon --version
Cinnamon 3.8.9

So I went for writing the applet myself. Given the so-so level of difficulty, I should have done that to begin with.

Spoiler: I’m not going to dive into the details of that, because my hostapd-firewall-DHCP daemon setting is quite specific. Rather, I’ll discuss some general aspects of writing an applet.

So what is it like? Well, quite similar to writing something useful in JavaScript for a web page. Cinnamon’s applets are in fact written in JavaScript, and it feels pretty much the same. In particular, this thing about nothing happening when there’s an error, and now go figure what it was. And yes, there’s an error log console which helps with syntax errors (reminiscent of browsers’ error logs, discussed below), but often run-time errors just lead to nothing. A situation that is familiar to anyone with JavaScript experience.

And I also finally understand why the cinnamon process hogs CPU all the time. OK, it’s usually just a few percent, and still, what is it doing all that time with no user activity? Answer: Running some JavaScript, I suppose.

But all in all, if you’re good with JavaScript and understand the concepts of GUI programming and events + fairly OK with object oriented programming, it’s quite fun. And there’s another thing you better be good at:

Read The Source

As of December 2018, the API for Cinnamon applets is hardly documented, and it’s somewhat messy. So after reading a couple of tutorials (See “References” at the bottom of this post), the best way to grasp how to get X done is by reading the sources of existing applets:

  • System-installed: /usr/share/cinnamon/applets
  • User-installed: ~/.local/share/cinnamon/applets
  • Cinnamon’s core JavaScript sources: /usr/share/cinnamon/js

Each of these contains several subdirectories, typically with the form name@creator, one for each applet that is available for adding to the panels. Each of these has at least two files, which are also those to supply for your own applet:

  • metadata.json, which contains some basic info on the applet (probably used while selecting applets to add).
  • applet.js, which contains the JavaScript code for the applet.

It doesn’t matter if they’re executable, even though they often are.

There may also be additional *.js files.

There might also be a po/ directory, which often contains .po and .pot files that are intended for localizing the text displayed to the user. These go along with the _() function in the JavaScript code. For the purposes of a simple applet, these are not necessary. Ignore these _(“Something”) things in the JavaScript code, and read them as just “Something”.

Some applets allow parameter setting. The runtime values for these are at ~/.cinnamon, which contains configuration data etc.

Two ways to object orient

Unfortunately, there are two styles for defining the applet class, both of which are used. This is a matter of minor confusion if you read the code of a few applets, and therefore worthy to note: Some of the applets use JavaScript class declarations (extending a built-in class), e.g.

class CinnamonSoundApplet extends Applet.TextIconApplet {
    constructor(metadata, orientation, panel_height, instanceId) {
        super(orientation, panel_height, instanceId);

and others use the “prototype” syntax:

MyApplet.prototype = {
  __proto__: Applet.IconApplet.prototype,

and so on. I guess they’re equivalent, despite the difference in syntax. Note that in the latter format, the constructor is a function called _init().

This way or another, all classes that employ timeout callbacks should have a destroy() method (no underscore prefix) to cancel them before quitting.

I wasn’t aware of these two syntax possibilities, and therefore started from the first applet I got my hands on. It happened to be written in the “prototype” syntax, which is probably the less preferable choice. I’m therefore not so sure my example below is a good starter.

Getting Started

It’s really about three steps to get an applet up and running.

  • Create a directory in ~/.local/share/cinnamon/applets/ and put the two files there: metadata.json and applet.js.
  • Restart Cinnamon. No, it’s not as bad as it sounds. See below.
  • Install the applet to some panel, just like any other applet.
I warmly suggest copying an existing applet and hacking it. You can start with the skeleton applet I’ve listed below, but there are plenty other available on the web, in particular along with tutorials.
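The first of those steps can be sketched in the shell. The name “myapplet@me” is made up for illustration; use your own name@creator:

```shell
# Scaffold the mandatory files for a new applet named "myapplet@me".
d=~/.local/share/cinnamon/applets/myapplet@me
mkdir -p "$d/icons"                         # icons/ is optional, for custom icons
touch "$d/metadata.json" "$d/applet.js"     # the two mandatory files
ls "$d"
```

Once the two files are filled in with real content and Cinnamon is restarted, the applet shows up in the list of applets available for adding.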

The development cycle (or: how to “run”)

None of the changes made in the applet’s directory (well, almost none) take any effect until Cinnamon is restarted, and when it is, everything is in sync. It’s not like a reboot, and it’s fine to do on the computer you’re working on, really. All windows remain in their workspaces (even though the windows’ tabs at the panel may change order). No reason to avoid this, even if you have a lot of windows opened. Done it a gazillion times.

So how to restart Cinnamon: ALT-F2, type “r” and Enter. Then cringe as your desktop fades away and be overwhelmed when it returns, and nothing bad happened.

If something is wrong with your applet (or otherwise), there’s a notification saying “Problems during Cinnamon startup”, elaborating that “Cinnamon started successfully, but one or more applets, desklets or extensions failed to load”. From my own experience, that’s as bad as it gets: The applet wasn’t loaded, or doesn’t run properly.

Press Win+L (or ALT-F2, then type “lg” and Enter, or type “cinnamon-looking-glass” at shell prompt as non-root user) to launch the Looking Glass tool (called “Melange”). The Log tab is helpful with detailed error messages (colored red, that helps). Alternatively, look for the detailed error message in .xsession-errors in your home directory.

Note that the error message often appears before the line saying that the relevant applet was loaded.

OK, so now to some more specific topics.

Custom icons

Icons are referenced by their file name, without extension, in the JavaScript code as well as the metadata.json file (as “icon” assignment). The search path is the applet’s own icons/ subdirectory and the system icons, present at /usr/share/icons/.

My own experience is that creating an icons/ directory side-by-side with applet.js, and putting a PNG file named wifi-icon-off.png there makes a command like

this.set_applet_icon_name("wifi-icon-off");

work for setting the applet’s main icon on the panel. The PNG’s transparency is honored. The official file format is SVG, but who’s got patience for that.

Same goes for menu items with icons:

item = new PopupMenu.PopupIconMenuItem("Access point off", "wifi-icon-off", St.IconType.FULLCOLOR);

item.connect('activate', Lang.bind(this, function() {
   Main.Util.spawnCommandLine("/usr/local/bin/access-point-ctl off");
}));

My own experience with the menu items is that if the icon file isn’t found, Cinnamon silently puts an empty slot instead. JavaScript-style no fussing.

I didn’t manage to achieve something similar with the “icon” assignment in metadata.json, so the choices are either to save the icon in /usr/share/icons/, or use one of the system icons, or eliminate the “icon” assignment altogether from the JSON file. I went for the last option. This resulted in a dull default icon when installing the applet, but this is of zero importance for an applet I’ve written myself.

Running shell commands from JavaScript

The common way to execute a shell command is e.g.

const Main = imports.ui.main;

Main.Util.spawnCommandLine("/usr/local/bin/access-point-ctl off");

The assignment of Main is typically done once, and at the top of the script, of course.

When the output of the command is of interest, it becomes slightly more difficult. The following function implements the parallel of the Perl backtick operator: Run the command, and return the result as a string. Note that unlike its bash counterpart, newlines remain newlines, and are not translated into spaces:

const GLib = imports.gi.GLib;

function backtick(command) {
  try {
    let [result, stdout, stderr] = GLib.spawn_command_line_sync(command);

    if (stdout != null)
      return stdout.toString();
  } catch (e) {
    global.logError(e); // Failed to run the command at all
  }

  return "";
}

and then one can go e.g.

let output = backtick("/bin/systemctl is-active hostapd");

after which output is a string containing the result of the execution (with a trailing newline, by the way).
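For comparison, bash’s own behavior is easy to demonstrate: command substitution strips trailing newlines, which backtick() above does not:

```shell
# Command substitution strips *trailing* newlines: the captured string is
# "active" (6 characters), not "active\n" (7).
out="$(printf 'active\n')"
echo "length=${#out}"   # prints "length=6"
```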

As of December 2018, there’s no proper documentation of Cinnamon’s Glib wrapper, however the documentation of the C library can give an idea.

My example applet

OK, so here’s a skeleton applet for getting started with.

Its pros:

  • It’s short, quite minimal, and keeps the mumbo-jumbo to a minimum
  • It shows a simple drop-down menu applet, which allows running a different shell command from each entry.

Its cons:
  • It’s written in the less-preferable “prototype” syntax for defining objects.
  • It does nothing useful. In particular, the shell commands it executes exist only on my computer.
  • It depends on a custom icon (see “Custom Icons” above). Maybe this is an advantage…?

So if you want to give it a go, create a directory named ‘wifier@eli’ (or anything else?) in ~/.local/share/cinnamon/applets/, and put this as metadata.json:

{
    "description": "Turn Wifi Access Point on and off",
    "uuid": "wifier@eli",
    "name": "Wifier"
}

And this as applet.js:

const Applet = imports.ui.applet;
const Lang = imports.lang;
const St = imports.gi.St;
const Main = imports.ui.main;
const PopupMenu = imports.ui.popupMenu;
const UUID = 'wifier@eli';

function ConfirmDialog() {
}

function MyApplet(orientation, panelHeight, instanceId) {
  this._init(orientation, panelHeight, instanceId);
}

MyApplet.prototype = {
  __proto__: Applet.IconApplet.prototype,

  _init: function(orientation, panelHeight, instanceId) {
    Applet.IconApplet.prototype._init.call(this, orientation, panelHeight, instanceId);

    try {
      this.set_applet_icon_name("wifi-icon-off");
      this.set_applet_tooltip("Control Wifi access point");

      this.menuManager = new PopupMenu.PopupMenuManager(this);
      this.menu = new Applet.AppletPopupMenu(this, orientation);
      this.menuManager.addMenu(this.menu);

      this._contentSection = new PopupMenu.PopupMenuSection();
      this.menu.addMenuItem(this._contentSection);

      // First item: Turn on
      let item = new PopupMenu.PopupIconMenuItem("Access point on", "wifi-icon-on", St.IconType.FULLCOLOR);

      item.connect('activate', Lang.bind(this, function() {
        Main.Util.spawnCommandLine("/usr/local/bin/access-point-ctl on");
      }));
      this.menu.addMenuItem(item);

      // Second item: Turn off
      item = new PopupMenu.PopupIconMenuItem("Access point off", "wifi-icon-off", St.IconType.FULLCOLOR);

      item.connect('activate', Lang.bind(this, function() {
        Main.Util.spawnCommandLine("/usr/local/bin/access-point-ctl off");
      }));
      this.menu.addMenuItem(item);
    } catch (e) {
      global.logError(e);
    }
  },

  on_applet_clicked: function(event) {
    this.menu.toggle();
  }
};

function main(metadata, orientation, panelHeight, instanceId) {
  let myApplet = new MyApplet(orientation, panelHeight, instanceId);
  return myApplet;
}

Next, create an “icons” subdirectory (e.g. ~/.local/share/cinnamon/applets/wifier@eli/icons/) and put a small (32 x 32 ?) PNG image there as wifi-icon-off.png, which functions as the applet’s top icon. Possibly download mine from here.

Anyhow, be sure to have an icon file. Otherwise there will be nothing on the panel.

Finally, restart Cinnamon, as explained above. You will get errors when trying the menu items (failed execution), but don’t worry — nothing bad will happen.


Failed: Install Argos Shell Extension on Cinnamon

You have been warned

This is my pile of jots as I tried to install Argos, “Gnome Shell Extension in seconds”, on my Mint 19 Cinnamon machine. As the title implies, it didn’t work out, so I went for writing an applet from scratch, more or less.

Not being strong on Gnome internals, I’m under the impression that it’s simply because Cinnamon isn’t Gnome shell. This post is just the accumulation of notes I took while trying. Nothing to follow step-by-step, as it leads nowhere.

It’s here for the crumbs of info I gathered nevertheless.

Here we go

It says on the project’s Github page that a recent version of Gnome should include Argos. So I went for it:

# apt install gnome-shell-extensions

And since I’m at it:

# apt install gnome-tweaks

Restart Gnome shell: ALT-F2, type “r” and enter. For a second, it looks like a logout, but everything returns to where it was. Don’t hesitate doing this, even if there are a lot of windows opened.

Nada. So I went the manual way. First, found out my Gnome Shell version:

$ apt-cache show gnome-shell | grep Version

or better,

$ gnome-shell --version
GNOME Shell 3.28.3

and downloaded the extension for Gnome shell 3.28 from Gnome’s extension page. Then realized it’s slightly out of date with the git repo, so

$ git clone
$ cd argos
$ cp -r '' ~/.local/share/cinnamon/extensions/

Note that I copied it into cinnamon’s subdirectory. It’s usually ~/.local/share/gnome-shell/extensions, but not when running Cinnamon!

Restart Gnome shell again: ALT-F2, type “r” and enter.

Then open the “Extensions” GUI thingy from the main menu. Argos extension appears. Select it and press the “+” button to add it.

Restart Gnome shell again. This time a notification appears, saying “Problems during Cinnamon startup” elaborating that “Cinnamon started successfully, but one or more applets, desklets or extensions failed to load”.

Looking at ~/.xsession-errors, I found

Cjs-Message: 12:14:42.822: JS LOG: [LookingGlass/error] []: Missing property "cinnamon-version" in metadata.json

Can’t argue with that, can you? Let’s see:

$ cinnamon --version
Cinnamon 3.8.9

So edit ~/.local/share/cinnamon/extensions/ and add the “cinnamon-version” line in the snippet below (not at the end, because the last line doesn’t end with a comma):

  "version": 2,
  "cinnamon-version": [ "3.6", "3.8", "4.0" ],
  "shell-version": ["3.14", "3.16", "3.18", "3.20", "3.22", "3.24", "3.26", "3.28"]

I took this line from some applet I found under ~/.local/share/cinnamon. Not much thought given here.

And guess what? Reset again with ALT-F2 r. Failed again. Now in ~/.xsession-errors:

Cjs-Message: 12:50:32.643: JS LOG: [LookingGlass/error]
[]: No JS module 'extensionUtils' found in search path
[]: Error importing extension.js from

It seems like Cinnamon has changed the extension mechanism altogether, which explains why there’s no extension tab in Gnome Tweaks, and why extensionUtils is missing.

Maybe this explains it. Frankly, I didn’t bother to read that long discussion, but the underlying issue is probably buried there.

Setting PS1 with color codes properly with gnome-terminal

There are plenty of web pages describing the different escape codes for changing the colors of an ANSI-emulating terminal. This is in particular useful for giving the shell prompt different colors, to prevent confusion between different computers, for example.

The trick is to set the PS1 bash variable. What is less often told is that \[ and \] tokens must enclose each color escape sequence, or things get completely crazy: Pasting into the terminal creates junk, newlines aren’t interpreted properly, and a lot of other peculiarities with the cursor jumping to the wrong place all the time.

So this is wrong (just the color escape sequence, no enclosure):

PS1="\e[44m[\u@here \W]\e[m\\$ "

And this is right:

PS1="\[\e[44m\][\u@here \W]\[\e[m\]\\$ "

Even the “wrong” version above will produce the correct colors, but as mentioned above, other weird stuff happens.
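To avoid balancing the \[ \] pairs by hand, a tiny helper can emit a properly wrapped color sequence. This is my own convenience function, not anything standard:

```shell
# Wraps an ANSI SGR code in the \[ \] markers bash needs around zero-width
# sequences in PS1. ps1_color 44 emits \[\e[44m\]; an empty argument emits
# the reset sequence \[\e[m\].
ps1_color () {
  printf '\\[\\e[%sm\\]' "$1"
}

PS1="$(ps1_color 44)[\u@here \W]$(ps1_color '')\\$ "
echo "$PS1"   # same string as the "right" version above
```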