Good day.
I'm trying to write a PHP crawler, but honestly I'm stuck and need a small push.
It works as follows: crawler.php reads rows from the DB, and for each row with status = 0 it tries to index that page.
It collects all the URLs found on the page, indexes them, and stores them in the DB.
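For reference, the `search` table the script works against would look roughly like this — a reconstruction from the INSERT/UPDATE/SELECT queries below; the column types and lengths are assumptions:

```sql
-- Assumed schema, reconstructed from the queries in crawler.php.
CREATE TABLE `search` (
  `id`          INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `title`       VARCHAR(255)  NOT NULL,
  `url`         VARCHAR(2048) NOT NULL,
  `description` TEXT,
  `status`      TINYINT NOT NULL DEFAULT 0,  -- 0 = waiting to be crawled, 1 = indexed
  PRIMARY KEY (`id`)
);
```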
<?php
ini_set("display_errors", "on");
$dir=realpath(dirname(__FILE__));
include($dir."/../inc/db.php");
function shutdown(){
global $dir;
$error = error_get_last();
if($error !== NULL && $error['type'] === E_ERROR) {
// NOTE: this fatal-error branch is empty; logging $error here (or resetting
// the current row's status back to 0) would make aborted runs visible
}
}
set_time_limit(0);
register_shutdown_function('shutdown');
include($dir."/PHPCrawl/libs/PHPCrawler.class.php");
include($dir."/simple_html_dom.php");
function addURL($t, $u, $d){
global $dbh;
if($t!="" && filter_var($u, FILTER_VALIDATE_URL)){
$check=$dbh->prepare("SELECT `id` FROM `search` WHERE `url`=?");
$check->execute(array($u));
$t=trim(preg_replace("/\s+/", " ", $t));
$t=html_entity_decode($t, ENT_QUOTES);
$d=html_entity_decode($d, ENT_QUOTES);
echo $u."<br/>\n";
ob_flush();
flush();
if($check->rowCount()==0){
// Note: COUNT(*)+1 produces duplicate ids after deletes and races with
// parallel runs; letting AUTO_INCREMENT assign `id` avoids both problems.
$sql=$dbh->prepare("INSERT INTO `search` (`title`, `url`, `description`, `status`) VALUES (?, ?, ?, ?)");
$sql->execute(array(
$t,
$u,
$d,
1
));
}else{
$sql=$dbh->prepare("UPDATE `search` SET `description` = ?, `title` = ?, `status` = ? WHERE `url`=?");
$sql->execute(array(
$d,
$t,
1,
$u
));
}
}
}
class WSCrawler extends PHPCrawler {
function handleDocumentInfo(PHPCrawlerDocumentInfo $p){
$u=$p->url;
$c=$p->http_status_code;
$s=$p->source;
if($c==200 && $s!=""){
$html = str_get_html($s);
if(is_object($html)){
$d="";
$do=$html->find("meta[name=description]", 0);
if($do){
$d=$do->content;
}
$t=$html->find("title", 0);
if($t){
$t=$t->innertext;
addURL($t, $u, $d);
}
$html->clear();
unset($html);
}
}
}
}
function crawl($u){
$C = new WSCrawler();
$C->setURL($u);
$C->addContentTypeReceiveRule("#text/html#");
$C->addURLFilterRule("#\.(jpg|gif|png|pdf|jpeg|svg|css|js)$# i");
if(!isset($GLOBALS['bgFull'])){
$C->setTrafficLimit(2000 * 1024);
}
$C->obeyRobotsTxt(true);
$C->obeyNoFollowTags(true);
$C->setUserAgentString("King Search Engine");
$C->setFollowMode(1);
$C->go();
}
if(!isset($url1Array)){
// Get the last indexed URLs (If there isn't, use default URL's) & start Crawling
$last=$dbh->query("SELECT `url` FROM search WHERE status = '0'");
$count=$last->rowCount();
if($count < 1){
echo "No URL for crawling";
}else{
$urls=$last->fetchAll();
$index=rand(0, $count-1);
crawl($urls[$index]['url']);
echo "Add!";
}
}elseif(is_array($url1Array)){
foreach($url1Array as $url){
crawl($url);
}
}
?>
I have two problems.
1. The script picks a URL with status 0, for example
www.root.cz, and starts working on it. Unfortunately, for a reason unknown to me, it refuses to set status = 1 on the page it started with. For example:
www.root.cz - the script loads it, indexes the URLs found there and stores them in the DB - the new rows get status 1, but root.cz itself never does. I haven't found a solution.
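A likely cause: addURL() only flips status via its UPDATE branch, and that branch matches on the exact URL string PHPCrawl reports back. PHPCrawl normalizes the seed URL (adds the scheme, a trailing slash, follows redirects), so a row stored as www.root.cz never matches http://www.root.cz/ and stays at status 0. One simple fix is to flip the seed row yourself right after crawl($u) returns, using the original string from the DB. A minimal, self-contained sketch (SQLite in memory is used here only so the example runs on its own; in crawler.php you would use your existing $dbh):

```php
<?php
// Sketch of the problem-1 fix. Assumes the `search` table from crawler.php;
// SQLite replaces MySQL here only to make the example self-contained.
$dbh = new PDO('sqlite::memory:');
$dbh->exec("CREATE TABLE search (id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT, url TEXT, description TEXT, status INTEGER)");
$dbh->exec("INSERT INTO search (url, status) VALUES ('www.root.cz', 0)");

// Call this right after crawl($u) returns: addURL() only UPDATEs rows whose
// stored URL exactly matches what PHPCrawl reports, and PHPCrawl normalizes
// the seed (scheme, trailing slash), so 'www.root.cz' never matches
// 'http://www.root.cz/'. Updating by the original string sidesteps that.
function markSeedIndexed(PDO $dbh, $seedUrl) {
    $upd = $dbh->prepare("UPDATE search SET status = 1 WHERE url = ?");
    $upd->execute(array($seedUrl));
}

markSeedIndexed($dbh, 'www.root.cz');
echo $dbh->query("SELECT status FROM search WHERE url = 'www.root.cz'")->fetchColumn(), "\n";
```

In crawler.php the call would go right after `crawl($urls[$index]['url']);`, passing the same `$urls[$index]['url']` string.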
2. If a page has a huge number of links, the script simply dies on the time limit and starts over from scratch. How do I get the script to stop after, say, 30 seconds, "refresh", and continue where it left off?
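PHPCrawl itself ships support for exactly this: aborting a crawl and resuming it in a later process. The sketch below is an assumption, not a drop-in — it presumes your bundled copy is PHPCrawl 0.8x, where enableResumption(), getCrawlerId() and resume() exist (check PHPCrawler.class.php), and the id-file path is hypothetical. Combined with the 15-minute cron, each run either starts a crawl or resumes the interrupted one:

```php
<?php
// Sketch only: assumes PHPCrawl 0.8x resumption support and the WSCrawler
// subclass from crawler.php. Method names should be verified against the
// bundled PHPCrawler.class.php.
$C = new WSCrawler();
$C->setURL($u);
$C->enableResumption();              // allow this crawl to be aborted and resumed

$idFile = "/tmp/crawler-id.tmp";     // hypothetical location for the crawler id
if (file_exists($idFile)) {
    // a previous run was interrupted: pick up where it left off
    $C->resume((int)file_get_contents($idFile));
} else {
    // first run: remember the crawler id so a later run can resume
    file_put_contents($idFile, $C->getCrawlerId());
}

// Stop cleanly before any PHP/webserver limit kills the process;
// the cap per cron run is a tunable assumption.
$C->setRequestLimit(50);
$C->go();

// Once a crawl finishes completely, delete the id file so the next cron
// run starts fresh on the next status = 0 URL instead of resuming.
```

The key point is that the 30-second "refresh" loop is not needed at all: each cron run does a bounded amount of work and the crawler's own state file carries the position across runs.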
The whole thing is meant to work like this: cron calls the script every 15 minutes, the script checks which rows have status 0, and indexes them.
Thanks for any advice.