The command and output are as follows:
Administrator@f523540 ~
$ cd /cygdrive/d/nutch/apache-nutch-1.4-bin/runtime/local/

Administrator@f523540 /cygdrive/d/nutch/apache-nutch-1.4-bin/runtime/local
$ ./bin/nutch crawl urls -dir crawl -topN 5 -depth 3
cygpath: can't convert empty path
solrUrl is not set, indexing will be skipped...
crawl started in: crawl
rootUrlDir = urls
threads = 10
depth = 3
solrUrl=null
topN = 5
Injector: starting at 2012-06-17 13:47:45
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
        at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
        at org.apache.nutch.crawl.Crawl.run(Crawl.java:127)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
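Note: the stack trace above only reports that the Injector's MapReduce job failed; the underlying cause is normally written to the local runtime's Hadoop log rather than to the console. Below is a minimal sketch for digging further, assuming the default Nutch 1.x local-runtime layout where log4j writes to logs/hadoop.log; the actual root cause is not shown in the output above, so this only shows where to look.

# Show the detailed error behind "Job failed!" (run from the same
# runtime/local directory used above; logs/hadoop.log is the default
# log file in the Nutch local runtime)
cd /cygdrive/d/nutch/apache-nutch-1.4-bin/runtime/local
tail -n 50 logs/hadoop.log

# The crawl command itself, annotated for reference:
#   urls    - directory containing the seed URL list (rootUrlDir above)
#   -dir    - output directory for the crawldb/segments ("crawl")
#   -depth  - number of crawl rounds / link depth (3)
#   -topN   - maximum pages fetched per round (5)
# The "solrUrl is not set, indexing will be skipped" message only means
# no Solr indexing is done; it is not itself the cause of the IOException.
./bin/nutch crawl urls -dir crawl -topN 5 -depth 3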
Environment: Cygwin, Windows XP, Java 1.6, Nutch 1.4. Has anyone run into this problem before? Looking forward to your answers!