Problem : Using “find” “exec” and “rm”

This is a variation of a question I posted and received one answer to earlier.  As I thought about it, I realized I left out one very important piece of info from my original question *and* I'd like to get clear about the various components of the command. 500 points because I'd *really* like to know soon. Here goes .....

Logged in as root (this is the part I left out before, and it's extremely important info), I typed the following at the command prompt in a directory called /tmp/test and pressed Enter:

find / \ -name foo -exec \ rm {} \;

As the command was running I got a lot of scary messages along the lines of "such and such is a directory".  I quickly pressed Ctrl-C to stop it.

What I was trying to do was search the machine for all files named “foo” and delete them.  Does anyone know if the above may have deleted other things?


Solution: Using “find” “exec” and “rm”

Monkeybiz, sjm is right: \{space}-name is different from \-name.

The find command can take multiple “locations” to search in.
mkdir /tmp/a     /tmp/b
touch /tmp/a/1  /tmp/a/2  /tmp/b/1  /tmp/b/2
find /tmp/a /tmp/b  -name 1 -print    — will return:
/tmp/a/1
/tmp/b/1

but with that one stray backslash,  find /tmp/a \ -name 1 -print  will return
all the files in /tmp/a (1 and 2) AND the following messages:
find: 0652-019 The status on -name is not valid.
find: 0652-019 The status on 1 is not valid.
because it's trying to scan -name and 1 as if they were two more directories.
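For completeness, here's the corrected syntax with the stray backslashes removed, as a safe sketch run against a throwaway directory instead of / so it's harmless to try. (The -type f test is my addition, not part of the original command; it restricts matches to regular files, which avoids rm's "is a directory" complaints.)

```shell
# Safe demo of the corrected syntax, using a scratch directory.
demo=$(mktemp -d)
mkdir "$demo/sub"
touch "$demo/foo" "$demo/sub/foo" "$demo/bar"

# No stray backslashes: find searches only $demo, matches regular
# files named "foo", and runs rm once per match via {}.
find "$demo" -type f -name foo -exec rm {} \;

ls "$demo"        # only bar and sub remain; both foo files are gone
rm -r "$demo"
```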

Something mentioned in your other question that I think is of value here:
someone mentioned creating a list of files and then running that through a script to remove them.  Don't overlook the power of xargs.

By default in AIX (at least V5), the command line is limited to roughly 20K bytes, and some directory lists can well exceed that.  Secondly, performance:
find /tmp -name foo -exec rm {} \;   will fork the rm command for every file it finds.
If this were to return 1000s of files, you get rm forked 1000s of times.
but,
find /tmp -name foo |xargs rm
gets the same effect, but rm is forked with a long list of files at a time, so system overhead is reduced and the command runs faster.
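A quick sketch of that difference (the foo1..foo5 names are made up for the demo): with -exec rm {} \; each match would fork rm separately, while the pipe below hands all the names to a single rm.

```shell
demo=$(mktemp -d)
touch "$demo"/foo1 "$demo"/foo2 "$demo"/foo3 "$demo"/foo4 "$demo"/foo5

# xargs collects the five names from find's output and forks rm
# once with all five as arguments, instead of five separate forks.
find "$demo" -type f -name 'foo*' | xargs rm

ls "$demo"        # empty -- all five are gone
rm -r "$demo"
```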
This is also important if you have a list of files produced by find {blah} > listfile.
Once you edit your list to remove the files you really want to keep, then
cat listfile | xargs rm
will remove all the files named in the list, while rm $(cat listfile) will fail if listfile exceeds ~20K.
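Here's a sketch of that edit-then-delete workflow. The file names and the grep -v step are made up for the demo; in practice you'd open the list in an editor and delete the lines for files you want to keep.

```shell
demo=$(mktemp -d)
touch "$demo/foo" "$demo/foo.keep"

# Build the candidate list, then trim it -- grep -v stands in for
# hand-editing the list to drop the entries you want to keep.
find "$demo" -name 'foo*' > "$demo/listfile"
grep -v keep "$demo/listfile" > "$demo/trimmed"

# xargs feeds the surviving names to rm in large batches, so the
# ~20K command-line limit never comes into play.
cat "$demo/trimmed" | xargs rm

ls "$demo"        # foo is gone; foo.keep survives
rm -r "$demo"
```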