The cache words storage mode is able to index and quickly search through several million documents.
The main idea of cache storage mode is that the word index and URL sorting information are stored on disk rather than in the SQL database. Full URL information, however, is kept in the SQL database (tables url and urlinfo). The word index is divided into the number of files specified by the WrdFiles command (default value is 0x300). URL sorting information is divided into the number of files specified by the URLDataFiles command (default value is 0x300).
Note: you must use identical values for the WrdFiles and URLDataFiles commands in all your configuration files.
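For example, the following lines simply restate the default values; whatever values you choose, they must appear identically in every configuration file (indexer.conf, cached.conf, and so on):
WrdFiles 0x300
URLDataFiles 0x300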
The word index is located in files under the /var/tree directory of the DataparkSearch installation. URL sorting information is located in files under the /var/url directory of the DataparkSearch installation.
Two additional programs, cached and splitter, are used in cache mode indexing.
cached is a TCP daemon which collects word information from indexers and stores it on your hard disk. It can operate in two modes: as the old cachelogd daemon, which only logs data, and in a new mode, in which the cachelogd and splitter functionality are combined.
splitter is a program that creates fast word indexes using the data collected by cached. Those indexes are later used by the search process.
To start "cache mode" follow these steps:
Start cached server:
cd /usr/local/dpsearch/sbin
./cached 2>cached.out &
It will write some debug information into the cached.out file. cached also creates a cached.pid file in the /var directory of the DataparkSearch installation.
cached listens for TCP connections and can accept several indexers from different machines. The theoretical maximum number of indexer connections is 128. In old mode, cached stores the information sent by indexers in the /var/splitter/ directory of the DataparkSearch installation; in new mode, it stores it in the /var/tree/ directory.
By default, cached starts in new mode. To run it in old mode, i.e. logs-only mode, run it with the -l switch:
cached -l
Or specify the LogsOnly yes command in your cached.conf.
You can specify the port for cached to use without recompiling. To do that, run
./cached -p8000
where 8000 is the port number you choose.
You can also specify a directory to store data (the /var directory by default) with this command:
./cached -w /path/to/var/dir
Configure your indexer.conf as usual, and in the DBAddr command add cache as the value of the dbmode parameter and localhost:7000 as the value of the cached parameter (see Section 3.10.2).
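For example (a sketch only; the MySQL URL, database name and credentials are placeholders for your own setup, while the dbmode and cached parameters are as described above):
DBAddr mysql://user:password@localhost/search/?dbmode=cache&cached=localhost:7000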
Run indexers. Several indexers can be executed simultaneously. Note that you may install indexers on different machines and then run them against the same cached server. This distributed setup makes indexing faster.
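For example (a sketch, assuming the default installation prefix used above and that each machine's indexer.conf points its cached parameter at the same server), start an indexer on each indexing machine:
/usr/local/dpsearch/sbin/indexer &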
Flushing cached buffers and URL data, and creating cache mode limits. To flush cached buffers and URL data and to create cache mode limits after indexing is done, send a -HUP signal to cached. You can use the cached.pid file to do this:
kill -HUP `cat /usr/local/dpsearch/var/cached.pid`
N.B.: you need to wait until all buffers have been flushed before going to the next step.
Creating the word index. This stage is not needed if cached runs in the new, i.e. combined, mode. Once information gathered by the indexers has been collected in the /var/splitter/ directory by cached, fast word indexes can be created. The splitter program is responsible for this. It is installed in the /sbin directory. Note that indexes can be created at any time without interrupting the current indexing process.
Run splitter without any arguments:
/usr/local/dpsearch/sbin/splitter
It will take all prepared files in the /var/splitter/ directory sequentially and use them to build the fast word index. Processed logs in the /var/splitter/ directory are truncated after this operation.
splitter has two command line arguments, -f [first file] and -t [last file], which allow limiting the range of files used. If no parameters are specified, splitter distributes all prepared files. You can limit the file range using the -f and -t keys, specifying the parameters in HEX notation. For example, splitter -f 000 -t A00 will create word indexes using files in the range from 000 to A00. These keys allow running several splitters at the same time, which usually builds the indexes more quickly. For example, this shell script starts four splitters in the background:
#!/bin/sh
splitter -f 000 -t 3f0 &
splitter -f 400 -t 7f0 &
splitter -f 800 -t bf0 &
splitter -f c00 -t ff0 &
There is a run-splitter script in the /sbin directory of the DataparkSearch installation. It helps to execute all the index-building steps in the proper sequence.
"run-splitter" has these two command line parameters:
run-splitter --hup --split
or a short version:
run-splitter -k -s
Each parameter activates the corresponding index-building step. run-splitter executes the index-building steps in the proper order:
Sending a -HUP signal to cached. The --hup (or -k) run-splitter argument is responsible for this.
Running splitter. The --split (or -s) key.
In most cases, just run the run-splitter script with both the -k and -s arguments. Separate use of these flags, each of which corresponds to an individual index-building step, is rarely required.
run-splitter has optional parameters: -p=n and -v=m, which specify the pause in seconds after each log buffer update and the verbosity level, respectively. n is the number of seconds (default value: 0), m is the verbosity level (default value: 4).
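For example (an illustrative invocation only, combining the parameters described above), the following runs both steps with a 5-second pause after each log buffer update and a quieter verbosity level:
run-splitter -k -s -p=5 -v=2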
To start using search.cgi in "cache mode", edit your search.htm template as usual and add cache as the value of the dbmode parameter of the DBAddr command.
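For example (a sketch; the database URL and credentials are placeholders for your own setup):
DBAddr mysql://user:password@localhost/search/?dbmode=cache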
To use search limits in cache mode, you should add the appropriate Limit command(s) to your indexer.conf (or cached.conf, if cached is used) and to search.htm or searchd.conf (if searchd is used).
To use, for example, search limits by tag, by category, and by site, add the following lines to search.htm or to indexer.conf (or searchd.conf, if searchd is used):
Limit t:tag
Limit c:category
Limit site:siteid
where t is the name of the CGI parameter (&t=) for this constraint and tag is the type of constraint.
Instead of tag/category/siteid in the example above, you can use any of the values from the table below:
Table 5-1. Cache limit types
Limit type | Description
category | Category limit.
tag | Tag limit.
time | Time limit (one-hour precision).
language | Language limit.
content | Content-Type limit.
siteid | url.site_id limit.
link | Limit by pages that link to the specified url.rec_id.
hostname (obsolete) | Hostname (URL) limit. This limit is obsolete and should be replaced by the siteid limit.
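As a usage sketch (the t parameter name comes from the Limit t:tag example above; the q query parameter, host and path are placeholders for your own search front-end), a search restricted by tag could then be requested as:
http://hostname/cgi-bin/search.cgi?q=test&t=mytag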