The given URLs are "overlaid" according to their priority, and each gets a name (to make updating only parts of them easier).
Such an extended URL has the form
The arguments before the URL are optional and can be given in any order; the URL itself must come last.
Please note that the full keywords are in lower case, whereas the abbreviations are capitalized!
A ':' is looked for; if the part before this character is a known keyword, it is used. As soon as we find an unknown keyword we treat it as a URL, i.e. we stop processing.
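The scan described above can be sketched as follows. This is a minimal illustration, not the actual FSVS parser; the keyword set ("name", "prio", "target") and the example URL are assumptions for demonstration only.

```python
# Sketch of the keyword scan: repeatedly look for ':', accept the part
# before it if it is a known keyword, and stop at the first unknown one.
# The keyword list below is illustrative, not the authoritative set.
KEYWORDS = {"name", "prio", "target"}
ABBREVS = {k[0].upper(): k for k in KEYWORDS}   # "N", "P", "T"

def parse_extended_url(spec):
    """Split e.g. 'N:base,prio:2,http://...' into ({settings}, url)."""
    settings = {}
    rest = spec
    while True:
        head, sep, tail = rest.partition(":")
        if not sep:
            break                     # no ':' left; the rest is the URL
        key = ABBREVS.get(head, head if head in KEYWORDS else None)
        if key is None:
            break                     # unknown keyword: stop, rest is the URL
        value, _, rest = tail.partition(",")
        settings[key] = value
    return settings, rest
```

Note that a plain URL passes through unchanged: its scheme (e.g. "http") is not a known keyword, so the scan stops immediately.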
The priority is in reverse numeric order - the lower the number, the higher the priority. (See
You should only use alphanumeric characters and the underscore here; in other words, characters matching [a-zA-Z0-9_]. (Whitespace, commas and semicolons are used as separators.)
the -r parameter; this sets the destination for all URLs.
The default target is
URL@revision - this syntax may be implemented in fsvs too. (But it has the problem that as soon as you have an '@' in the URL, you must give the target revision every time!)
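The ambiguity behind that caveat can be demonstrated with a naive split on the last '@'. This is a sketch only; the hostnames and revisions are made up, and real Subversion peg-revision parsing is more involved.

```python
# Why an in-URL '@' forces an explicit revision: splitting a bare
# "URL@revision" spec on the last '@' misparses any URL that itself
# contains one (e.g. a user@host in an svn+ssh URL).
def split_peg(spec):
    url, sep, rev = spec.rpartition("@")
    return (url, rev) if sep else (spec, None)
```

With no '@' in the URL, split_peg("http://host/repo@1234") correctly yields ('http://host/repo', '1234'); but split_peg("svn+ssh://user@host/repo") wrongly yields ('svn+ssh://user', 'host/repo'), so such a URL would always need a trailing "@revision" to parse as intended.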
COLUMNS=200 dpkg-query -l | cut -c5- | cut -f1 -d" " | wc
Multiplied by 150 000 entries, we get about 1 MB difference in the file size of the dir-file. Not really small ...
Currently we use about 92 bytes per entry, so we'd (unnecessarily) increase the size by about 10%.
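The figures above can be sanity-checked with a quick back-of-the-envelope calculation. The ~7 bytes of average name length is an assumption derived from the dpkg-query pipeline output (about 1 MB spread over 150 000 entries); only the 150 000 entries and 92 bytes per entry come from the text.

```python
# Back-of-the-envelope check: storing a name string instead of a small
# number costs roughly avg_name_bytes extra per entry.
entries = 150_000
avg_name_bytes = 7                    # assumed average name length
extra = entries * avg_name_bytes      # total extra bytes in the dir-file
increase = extra / (entries * 92)     # relative growth at 92 bytes/entry
```

This gives roughly 1 MB extra and an increase of about 8%, i.e. in the ballpark of the "about 10%" quoted above.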
That's why there's a url_t::internal_number.