The given URLs are "overlaid" according to their priority, and each gets a name (to make it easy to update only parts).
Such an extended URL has the form

  ["name:"{name},]["target:"{t-rev},]["prio:"{prio},]URL

where URL is a plain Subversion URL, e.g. http://..., svn://... or svn+ssh://....
The arguments before the URL are optional and can be given in any order; the URL itself must always be last.
Examples:
name:perl,prio:5,svn://...
N:perl,P:5,T:324,svn://...
Please mind that the full keywords are in lower case, whereas the abbreviations are capitalized!
Internally the string is scanned for a ":"; if the part before that character is a known keyword, it is processed as such. As soon as an unknown keyword is encountered, the rest is treated as the URL, i.e. processing stops.
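This scanning loop can be sketched as follows. This is a hypothetical illustration, not FSVS's actual parser; the struct layout, the buffer sizes, and the use of -1 to mean HEAD are assumptions made here for the example.

```c
/* Hypothetical sketch of the keyword-prefix scanning described above;
 * not FSVS's actual implementation. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct ext_url {
    char name[64];
    long target;     /* assumption: -1 stands for HEAD (the default) */
    int prio;
    const char *url; /* points into the original spec string */
};

/* Scans "keyword:value," pairs from the front of spec; stops at the
 * first unknown keyword and takes the rest as the URL.
 * Accepts the full lowercase keywords or the uppercase abbreviations.
 * Returns 1 on success, 0 on a malformed spec. */
static int parse_ext_url(const char *spec, struct ext_url *out)
{
    const char *p = spec;

    out->name[0] = 0;
    out->target = -1;
    out->prio = 0;

    for (;;) {
        const char *colon = strchr(p, ':');
        const char *val, *comma;
        size_t klen;

        if (!colon) break;
        klen = (size_t)(colon - p);
        val = colon + 1;
        comma = strchr(val, ',');

        if ((klen == 4 && !strncmp(p, "name", 4)) ||
            (klen == 1 && *p == 'N')) {
            size_t vlen;
            if (!comma) return 0;
            vlen = (size_t)(comma - val);
            if (vlen >= sizeof(out->name)) return 0;
            memcpy(out->name, val, vlen);
            out->name[vlen] = 0;
        } else if ((klen == 6 && !strncmp(p, "target", 6)) ||
                   (klen == 1 && *p == 'T')) {
            if (!comma) return 0;
            out->target = strtol(val, NULL, 10);
        } else if ((klen == 4 && !strncmp(p, "prio", 4)) ||
                   (klen == 1 && *p == 'P')) {
            if (!comma) return 0;
            out->prio = (int)strtol(val, NULL, 10);
        } else {
            /* Unknown keyword - stop; the rest is the URL itself.
             * This also handles "svn:", "http:", etc. naturally. */
            break;
        }
        p = comma + 1;
    }
    out->url = p;
    return 1;
}
```

Note how a bare URL works without special-casing: "svn" and "http" are simply unknown keywords, so scanning stops immediately and the whole string is taken as the URL.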
The priority is in reverse numeric order: the lower the number, the higher the priority. (See url__current_has_precedence().)
For the name you should use only alphanumeric characters and the underscore; in other words, \w or [a-zA-Z0-9_]. (Whitespace, commas and semicolons are used as separators.)
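A check for this character rule could look like the following hypothetical helper (not FSVS code; the rejection of empty names is an assumption made here):

```c
#include <ctype.h>

/* Returns 1 if name consists only of [a-zA-Z0-9_], per the rule above;
 * an empty name is rejected (assumption for this sketch). */
static int valid_url_name(const char *name)
{
    if (!*name)
        return 0;
    for (; *name; name++)
        if (!isalnum((unsigned char)*name) && *name != '_')
            return 0;
    return 1;
}
```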
The target revision can also be set via the -r parameter; this sets the destination for all URLs. The default target is HEAD.
URL@revision: this syntax may be implemented in FSVS too. (But it has the problem that as soon as you have an @ in the URL, you must give the target revision every time!)
COLUMNS=200 dpkg-query -l | cut -c5- | cut -f1 -d" " | wc
Multiplied by 150 000 entries this gives about 1MB difference in the size of the dir-file. Not really small ...
Currently we use about 92 bytes per entry, so we'd (unnecessarily) increase the size by about 10%.
That's why there's an url_t::internal_number.
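The back-of-the-envelope estimate above can be checked with the numbers given in the text (150 000 entries, 92 bytes per entry, roughly 1MB of extra name data):

```c
/* Rough check of the size-overhead estimate above, using only the
 * figures quoted in the text. */

/* Percentage growth of the dir-file if 'extra' bytes were added on
 * top of entries * per_entry bytes. */
static double size_increase_percent(long entries, long per_entry,
                                    long extra)
{
    return 100.0 * (double)extra / ((double)entries * (double)per_entry);
}
```

With entries = 150 000, per_entry = 92 and extra = 1MB this comes out in the high single digits, consistent with the "about 10%" figure above.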