
How to correctly add a path to PATH?

I'm wondering where a new path should be added to the PATH environment variable. I know this is accomplished by editing .bashrc (for example), but it's not clear how to do it.
This way:
export PATH=~/opt/bin:$PATH
or this?
export PATH=$PATH:~/opt/bin
Question 2 (related). What's a workable way to append more paths on different lines? Initially I thought this could do the trick:
export PATH=$PATH:~/opt/bin
export PATH=$PATH:~/opt/node/bin
but it doesn't, because the second assignment appends not only ~/opt/node/bin but also the whole PATH assigned on the previous line.
This is a possible workaround:
export PATH=$PATH:~/opt/bin:~/opt/node/bin
but for readability I'd prefer to have one assignment for one path.

ANSWER:-

Either way works, but they don't do the same thing: the elements of PATH are checked left to right. In your first example, executables in ~/opt/bin will have precedence over those installed, for example, in /usr/bin, which may or may not be what you want.
In particular, from a safety point of view, it is dangerous to add paths to the front, because if someone can gain write access to your ~/opt/bin, they can put, for example, a different ls in there, which you'd then probably use instead of /bin/ls without noticing. Now imagine the same for ssh or your browser of choice... (The same goes triply for putting . in your path.)
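If you're not sure which copy of a command wins after editing PATH, a quick check (a sketch using standard shell builtins) is:
# List every match in PATH order; the first one shown is the one that runs.
type -a ls
# Or print just the winner:
command -v ls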

How to have tail -f show colored output

Try out multitail. This is an übergeneralization of tail -f. You can watch multiple files in separate windows, highlight lines based on their content, and more.
multitail -c /path/to/log
The colors are configurable. If the default color scheme doesn't work for you, write your own in the config file. For example, call multitail -cS amir_log /path/to/log with the following ~/.multitailrc:
colorscheme:amir_log
cs_re:green:INFO
cs_re:red:SEVERE
Another solution, if you're on a server where it's inconvenient to install non-standard tools, is to combine tail -f with sed or awk to add color selection control sequences. This requires tail -f to flush its standard output without delay even when it is a pipe; I don't know if all implementations do this.
tail -f /path/to/log | awk '
  /INFO/   {print "\033[32m" $0 "\033[39m"; next}   # green
  /SEVERE/ {print "\033[31m" $0 "\033[39m"; next}   # red
  {print}                                           # pass other lines through uncolored
'
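If you prefer sed, a rough equivalent (an untested sketch assuming GNU sed, which understands \x1b escapes; lines matching neither pattern pass through uncolored):
tail -f /path/to/log | sed \
  -e 's/.*INFO.*/\x1b[32m&\x1b[39m/' \
  -e 's/.*SEVERE.*/\x1b[31m&\x1b[39m/'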
Yet another possibility is to run tail -f in an Emacs shell buffer and use Emacs's syntax coloring abilities.

Does (should) LC_COLLATE affect character ranges?

Collation order through LC_COLLATE defines not only the sort order of individual characters, but also the meaning of character ranges. Or does it? Consider the following snippet:
unset LANGUAGE LC_ALL
echo B | LC_COLLATE=en_US grep '[a-z]'
Intuitively, B isn't in [a-z], so this shouldn't output anything. That's what happens on Ubuntu 8.04 or 10.04. But on some machines running Debian lenny or squeeze, B is found, because the range a-z includes everything that's between a and z in the collation order, including the capital letters B through Z.
All systems tested do have the en_US locale generated. I also tried varying the locale: on the machines where B is matched above, the same happens in every available locale (mostly latin-based: {en_{AU,CA,GB,IE,US},fr_FR,it_IT,es_ES,de_DE}{iso8859-1,iso8859-15,utf-8}, also Chinese locales) except Japanese (in any available encoding) and C/POSIX.
What do character ranges mean in regular expressions, when you go beyond ASCII? Why is there a difference between some Debian installations on the one hand, and other Debian installations and Ubuntu on the other? How do other systems behave? Who's right, and against what should a bug be reported?
(Note that I'm specifically asking about the behavior of character ranges such as [a-z] in en_US locales, primarily on GNU libc-based systems. I'm not asking how to match lowercase letters or ASCII lowercase letters.)

On two Debian machines, one where B is in [a-z] and one where it isn't, the output of LC_COLLATE=en_US locale -k LC_COLLATE is
collate-nrules=4
collate-rulesets=""
collate-symb-hash-sizemb=1
collate-codeset="ISO-8859-1"
and the output of LC_COLLATE=en_US.utf8 locale -k LC_COLLATE is
collate-nrules=4
collate-rulesets=""
collate-symb-hash-sizemb=2039
collate-codeset="UTF-8"

ANSWER:-

If you are using anything other than the C locale, you shouldn't be using ranges like [a-z], since these are locale-dependent and don't always give the results you would expect. As well as the case issue you've already encountered, some locales treat characters with diacritics (e.g. á) the same as the base character (i.e. a).
Instead, use a named character class:
echo B | grep '[[:lower:]]'
This will always give the correct result for the locale. However, you need to choose the locale to reflect the meaning of both your input text and the test you are trying to apply.
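For a quick comparison on a given system (a sketch; whether the first command prints anything depends on the libc version and which locales are generated):
echo B | LC_COLLATE=en_US.utf8 grep '[a-z]'        # locale-dependent: may or may not match
echo B | LC_COLLATE=en_US.utf8 grep '[[:lower:]]'  # never matches: B is not lowercase
echo B | LC_ALL=C grep '[a-z]'                     # ASCII range: never matches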

Any way to sync directory structure when the files are already on both sides?

I have two drives with the same files, but the directory structure is totally different.
Is there any way to 'move' all the files on the destination side so that they match the structure of the source side? With a script perhaps?
For example, drive A has:
/foo/bar/123.txt
/foo/bar/234.txt
/foo/bar/dir/567.txt
Whereas drive B has:
/some/other/path/123.txt
/bar/doo2/wow/234.txt
/bar/doo/567.txt
The files in question are huge (800GB), so I don't want to re-copy them; I just want to sync the structure by creating the necessary directories and moving the files.
I was thinking of a recursive script that would find each source file on the destination, then move it to a matching directory, creating it if necessary. But that's beyond my abilities ...

ANSWER:-

I'll go with Gilles and point you to Unison as suggested by hasen j. Unison was DropBox 20 years before DropBox. Rock solid code that a lot of people (myself included) use every day -- very worthwhile to learn. Still, join needs all the publicity it can get :)

This is only half an answer, but I have to get back to work :)
Basically, I wanted to demonstrate the little-known join utility, which does just that: it joins two tables on a common field.
First, set up a test case including file names with spaces:
for d in a b 'c c'; do mkdir -p "old/$d"; echo $RANDOM > "old/${d}/${d}.txt"; done
cp -r old new
(edit some directory and/or file names in new).
Now, we want to build a map: hash -> filename for each directory and then use join to match up files with the same hash. To generate the map, put the following in makemap.sh:
find "$1" -type f -exec md5 -r "{}" \; \
  | sed "s/\([a-z0-9]*\) ${1}\/\(.*\)/\1 \"\2\"/" \
makemap.sh spits out a file with lines of the form, 'hash "filename"', so we just join on the first column:
join <(./makemap.sh 'old') <(./makemap.sh 'new') >moves.txt
This generates moves.txt which looks like this:
49787681dd7fcc685372784915855431 "a/a.txt" "bar/a.txt"
bfdaa3e91029d31610739d552ede0c26 "c c/c c.txt" "c c/c c.txt"
The next step would be to actually do the moves, but my attempts got stuck on quoting... mv -i and mkdir -p should come in handy.
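For what it's worth, here is a rough sketch of that step (hypothetical and untested; it assumes file names contain no newlines and no embedded " " sequence, and is run from the directory containing new/):
while IFS= read -r line; do
  old=$(printf '%s\n' "$line" | sed 's/^[a-z0-9]* "\(.*\)" ".*"$/\1/')  # path on drive A
  new=$(printf '%s\n' "$line" | sed 's/^[a-z0-9]* ".*" "\(.*\)"$/\1/')  # current path on drive B
  [ "$old" = "$new" ] && continue       # already in the right place
  mkdir -p "new/$(dirname "$old")"      # create the target directory
  mv -i "new/$new" "new/$old"           # move, prompting before any overwrite
done < moves.txt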

Why can't a normal user `chown` a file?

Most unix systems prevent users from “giving away” files, that is, users may only run chown if they have the target user and group privileges. Since using chown requires owning the file or being root (users can never appropriate other users' files), only root can run chown to change a file's owner to another user.
The reason for this restriction is that giving away a file to another user can allow bad things to happen in uncommon, but still important situations. For example:
  • If a system has disk quotas enabled, Alice could create a world-writable file under a directory accessible only by her (so no one else could access that world-writable directory), and then run chown to make that file owned by another user, Bill. The file would then count under Bill's disk quota even though only Alice can use the file.
  • If Alice gives away a file to Bill, there is no trace that Bill didn't create that file. This can be a problem if the file contains illegal or otherwise compromising data.
  • Some programs require that their input file belongs to a particular user in order to authenticate a request (for example, the file contains some instructions that the program will perform on behalf of that user). This is usually not a secure design, because even if Bill created a file containing syntactically correct instructions, he might not have intended to execute them at this particular time. Nonetheless, allowing Alice to create a file with arbitrary content and have it taken as input from Bill can only make things worse.
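To see the restriction in action (assuming a non-root shell and some other user bill; the exact error text varies between systems):
touch myfile
chown bill myfile
# chown: changing ownership of 'myfile': Operation not permitted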

Can one PC be used by two users at the same time via dual-monitor?

Since modern PCs can use two screens at the same time, I wonder if it is possible to plug in two keyboards and mice as well, to have the two screens run two (more or less) independent X sessions at once?

ANSWER:-

In short, yes, this is possible. The relevant search string you are looking for is "Multi-seat X".
The Ubuntu wiki, Gentoo wiki, Debian wiki and Arch wiki all have articles related to multi-seat X. A number of other articles can be found on the Xorg wiki page on multiseat and even more can be found on google.
From what I can tell from these articles, there are two ways to do this:
  • Multiple X servers, or
  • Using Xephyr on top of Xorg.
Which of these methods will work for you will depend on the version of Xorg you are running and your hardware. Multiple X servers seem to be the easier route if your hardware setup supports it. There is also work to be done with the display manager, sound server, and other components -- much of which is covered in the various articles linked above.
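As a very rough illustration of the multiple-X-servers route (a sketch only; the layout names are hypothetical and the real per-seat configuration lives in xorg.conf, as the linked articles describe):
# One X server per seat, each bound to its own ServerLayout section.
# -sharevts keeps both servers active on the same virtual terminal.
X :0 -layout seat0 -sharevts vt7 &
X :1 -layout seat1 -sharevts vt7 &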
Additionally, there is a multiseat display manager (MDM) to automate these configurations. It's not yet integrated in most distributions, though (the mdm package in Ubuntu is a homonym).

File descriptors & shell scripting

I am having a very hard time understanding how one uses file descriptors in shell scripts.
I know the basics such as
exec 5>/tmp/foo
So fd 5 is attached to foo for writing.
exec 6</tmp/bar
… for reading.
exec 5>&-
… close fd.
Now what does this do?
#!/bin/bash

exec 5>/tmp/foo
exec 6</tmp/bar

cat <&6 | while read a
do
     echo $a >&5
done
As I understand it, &5 closes the fd, so how is the output still being redirected successfully after each call?
This is copied from: Here
It claims using this over a simple echo $a > file would make it much quicker; however, I fail to understand why. I would appreciate any links to a decent tutorial; my google powers seem to be failing me.

ANSWER:-

First, note that the syntax for closing is 5>&- or 6<&-, depending on whether the file descriptor is open for writing or for reading. There seems to be a typo or formatting glitch in that blog post.
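That is, to close the two descriptors from your script, you would write:
exec 5>&-   # close fd 5, which was open for writing
exec 6<&-   # close fd 6, which was open for reading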
Here's the commented script.
exec 5>/tmp/foo       # open /tmp/foo for writing, on fd 5
exec 6</tmp/bar       # open /tmp/bar for reading, on fd 6
cat <&6 |             # call cat, with its standard input connected to
                      # what is currently fd 6, i.e. /tmp/bar
while read a; do      # 
  echo $a >&5         # write to fd 5, i.e. /tmp/foo
done                  # 
There's no closing here. Because all the inputs and outputs are going to the same place in this simple example, the use of extra file descriptors is not necessary. You could write
cat </tmp/bar |
while read a; do
  echo $a
done >/tmp/foo
Using explicit file descriptors becomes useful when you want to write to multiple files in turn. For example, consider a script that writes data to a data output file, logging messages to a log file, and possibly error messages as well. That means three output channels: one for data, one for logs and one for errors. Since there are only two standard descriptors for output, a third is needed. You can call exec to open the output files:
exec >data-file                # redirect standard output to the data file
exec 3>log-file                # open the log file on fd 3
echo "first line of data"      # written to data-file
echo "this is a log line" >&3  # written to log-file

if something_bad_happens; then echo error message >&2; fi
exec >&-  # close the data output file
echo "output file closed" >&3
The remark about efficiency comes in when you have a redirection in a loop, like this (assume the file is empty to begin with):
while …; do echo $a >>/tmp/bar; done
At each iteration, the program opens /tmp/bar, seeks to the end of the file, appends some data and closes the file. It is more efficient to open the file once and for all:
while …; do echo $a; done >/tmp/bar
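You can measure the difference yourself (a sketch; the exact numbers will vary, but the single-open version is typically much faster):
# Reopen /tmp/bar on every iteration:
time { i=0; while [ "$i" -lt 10000 ]; do echo line >>/tmp/bar; i=$((i+1)); done; }
# Open /tmp/bar once for the whole loop:
time { i=0; while [ "$i" -lt 10000 ]; do echo line; i=$((i+1)); done >/tmp/bar; }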
When there are multiple redirections happening at different times, calling exec to perform redirections rather than wrapping a block in a redirection becomes useful.
exec >/tmp/bar
while …; do echo $a; done