7/27/2010

An experiment comparing the efficiency of for loops and while loops in C

An interview question:

for (i = 0; i < 100; i++)
{
    XXX;
}

The question asks you to improve the program's execution efficiency by working from the characteristics of the for loop itself, without touching the loop body XXX.

The standard answer is reportedly as follows:

In C, i = 0; runs only once, so there is nothing to optimize there;

i++ is already about the most compact code most compilers can produce;

i < 100 leaves room for optimization: define the constant with #define N 100 and change the code to i < N.

This is said to help efficiency for very large-scale computations.
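For illustration, a minimal, compilable sketch of what that suggested rewrite looks like (the main wrapper and the placeholder comment are added here; XXX stands for the unspecified loop body from the question):

#define N 100

int main(void)
{
    int i;
    for (i = 0; i < N; i++)   /* compare against the predefined N instead of the literal 100 */
    {
        /* XXX; */
    }
    return 0;
}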

---------------------------------------------------------------------------

Below is an experiment I ran on this kind of question:

It covers a plain for loop, a for loop over a predefined N, a reversed for loop (for(i = N; i > 0; i--)), and the corresponding while loops.

The loop bodies are, respectively: an empty loop, a summation, and an assignment.

Timing uses clock_t and is not converted to standard hours/minutes/seconds.

clock_t counts CPU clock ticks.
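A minimal sketch of the kind of timing harness used here. This is not the exact program, just an assumed reconstruction: clock() from <time.h> is the timer, N is the 100000000 iteration count from the tables below, and the loop body can be swapped for continue; or sum = i; to reproduce the other cases:

#include <stdio.h>
#include <time.h>

#define N 100000000

int main(void)
{
    volatile long long sum = 0;   /* volatile so the compiler does not optimize the loop away */
    long i;
    clock_t start, end;

    start = clock();
    for (i = 0; i < N; i++)
    {
        sum += i;                 /* replace with continue; or sum = i; for the other experiments */
    }
    end = clock();

    printf("%ld\n", (long)(end - start));   /* elapsed clock ticks, as listed in the tables */
    return 0;
}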

Experimental results (each result row lists ten runs, in clock ticks, followed by their total):
Empty-loop (continue;) experiment

for(i = 0; i < 100000000; i++)

239  238  238  240  238  242  240  240  241  239  2395

for(i = 0; i < N; i++)

241  238  239  238  239  240  237  239  241  239  2391

for(i = N; i > 0; i--)

240  238  239  238  239  237  238  238  238  239  2384

i = 0;    while( i++ < 100000000)

247  244  243  245  243  246  247  247  243  243  2448

i = 0;    while( i++ < 100000000);

247  243  243  244  241  245  244  244  245  244  2440

i = 0;    while( i++ < N)

245  245  243  245  243  245  244  245  241  244  2440

i = 0;    while( i++ < N);

244  242  245  245  244  247  244  246  243  244  2444

i = 100000000;  while( i-- > 0)

246  245  243  243  244  244  244  245  246  245  2445

i = 100000000;  while( i-- > 0);

246  244  244  241  243  244  242  244  246  244  2438

i = N;    while(i-- > 0)

244  246  245  246  244  244  245  246  245  248  2453

i = N;    while(i-- > 0);

273  245  245  244  246  244  246  245  245  245  2478

========================================================

sum += i; experiment

for(i = 0; i < 100000000; i++)

560  560  559  558  558  558  557  558  559  559  5586

for(i = 0; i < N; i++)

560  559  558  557  556  556  558  559  558  557  5578

for(i = N; i > 0; i--)

559  559  559  559  561  559  560  561  558  559  5594

558  560  558  558  560  557  558  558  557  557  5581

559  559  558  558  559  557  560  560  559  558  5587

i = 0;    while( i++ < 100000000)

512  511  511  512  509  512  511  509  510  511  5108

i = 0;    while( i++ < N)

512  512  511  511  511  511  510  512  513  512  5115

512  512  512  510  512  510  512  512  510  510  5112

i = 100000000;  while( i-- > 0)

526  525  526  524  526  526  525  525  524  525  5252

i = N;    while(i-- > 0)

525  525  525  524  525  524  525  525  524  524  5246

525  525  526  525  524  525  526  524  525  524  5249

========================================================

sum = i; experiment

for(i = 0; i < 100000000; i++)

360  363  362  361  361  361  360  361  360  360  3609

for(i = 0; i < N; i++)

362  361  360  361  360  361  361  361  361  361  3609

for(i = N; i > 0; i--)

362  362  361  362  362  362  360  361  361  361  3614

i = 0;    while( i++ < 100000000)

258  257  257  257  256  259  257  258  257  258  2574

i = 0;    while( i++ < N)

257  257  258  257  257  257  257  257  257  257  2571

i = 100000000;  while( i-- > 0)

313  313  312  313  313  311  313  311  313  312  3124

i = N;    while(i-- > 0)

312  311  313  310  312  312  313  312  310  312  3117

end

---------------------------------------------------------------------

Of course, this experiment is quite limited; actual running time is affected by many factors such as machine configuration and the compiler.

Still, a few points can be drawn from the results:

1. Predefining N did not noticeably improve efficiency;

2. i > 0 and i < N differ very little in efficiency; a more reliable conclusion would require further analysis of how the comparison is implemented at the assembly or machine-code level;

3. In some cases the while loop is clearly faster than the for loop.

Take this as a reference only.

7/15/2010

[Repost] ACL 2010 Best Paper Awards

The ACL 2010 official homepage seems to have finalized this year's Best Paper Awards a few days ago. The Awards page gives not only the Best long paper, Best short paper, and IBM Best student paper, but also their presentation times during the conference.

Best long paper
Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
Matthew Gerber and Joyce Chai
Despite its substantial coverage, NomBank does not account for all within-sentence arguments and ignores extrasentential arguments altogether. These arguments, which we call implicit, are important to semantic processing, and their recovery could potentially benefit many NLP applications. We present a study of implicit arguments for a select group of frequent nominal predicates. We show that implicit arguments are pervasive for these predicates, adding 65% to the coverage of NomBank. We demonstrate the feasibility of recovering implicit arguments with a supervised classification model. Our results and analyses provide a baseline for future work on this emerging task.

Best short paper
SVD and Clustering for Unsupervised POS Tagging
Michael Lamar, Yariv Maron, Mark Johnson, Elie Bienenstock
We revisit the algorithm of Schütze (1995) for unsupervised part-of-speech tagging. The algorithm uses reduced-rank singular value decomposition followed by clustering to extract latent features from context distributions. As implemented here, it achieves state-of-the-art tagging accuracy at considerably less cost than more recent methods. It can also produce a range of finer-grained taggings, with potential applications to various tasks.

IBM Best student paper
Extracting Social Networks from Literary Fiction
David Elson, Nicholas Dames, Kathleen McKeown
(Note: this is also a long paper, and the author is a student.)
We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literary scholars. Instead, our results suggest an alternative explanation for differences in social networks.

The Best Paper Awards are selected by a dedicated ACL committee and will be presented at the end of the conference. ACL 2010 also has a "Lifetime Achievement Award", but the recipient has not yet been announced. ACL 2010 gives an interesting introduction to this award:
The ACL Lifetime Achievement Award (LTA) was instituted on the occasion of the Association’s 40th anniversary meeting. The award is presented for scientific achievement, of both theoretical and applied nature, in the field of Computational Linguistics. Currently, an ACL committee nominates and selects at most one award recipient annually, considering the originality, depth, breadth, and impact of the entire body of the nominee’s work in the field. The award is a crystal trophy and the recipient is invited to give a 45-minute speech on his or her view of the development of Computational Linguistics at the annual meeting of the association. As of 2004, the speech has been subsequently published in the Association’s journal, Computational Linguistics. The speech is introduced by the announcement of the award winner, whose identity is not made public until that time.

The Lifetime Achievement Award is given to at most one person per year, someone whose influence on natural language processing and computational linguistics has been pivotal. Previous recipients are: Aravind Joshi (2002), Makoto Nagao (2003), Karen Spärck Jones (2004), Martin Kay (2005), Eva Hajicová (2006), Lauri Karttunen (2007), Yorick Wilks (2008) and Fred Jelinek (2009).

Note: when reposting, please credit the source 我爱自然语言处理 (52nlp): www.52nlp.cn

7/14/2010

Windows: automatic login, then lock

Why do this?

Some programs only start after a user logs in, e.g. Tor, eMule..., whereas an ftp or php server set up as a system service does not need a login.

How?

1. Automatic login. First configure automatic login: Start - Run - "control userpasswords2", uncheck "Users must enter a user name and password to use this computer", click OK, then enter the account and password to log in with.

//On Windows 8 the command is: netplwiz    (added 2013-03-29)

1.1 Note! Sometimes you also need this: Control Panel - Administrative Tools - Local Security Policy - Local Policies - Security Options - "Recovery console: Allow automatic administrative logon", set to "Enabled". Automatic login is now done.

2. Locking. The next step is locking. The command is "%windir%\system32\rundll32.exe user32.dll,LockWorkStation"; you can make a shortcut, a .bat file, a .vbs file... whatever you prefer. (A small C equivalent is sketched below.)

2.1 Automatic locking. You can drag the file that performs the lock straight into Start Menu - Startup, or edit the registry...
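For reference, a minimal C sketch that does the same thing as the rundll32 command above by calling the Win32 LockWorkStation API directly (assumes a Windows build environment and linking against user32.lib); the compiled .exe can be dropped into the Startup folder just like a shortcut or .bat file:

#include <windows.h>

int main(void)
{
    /* Equivalent to: rundll32.exe user32.dll,LockWorkStation */
    if (!LockWorkStation())
        return 1;   /* fails if not called from the interactive desktop session */
    return 0;
}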

powered by xcv58

7/11/2010

Repost: ACL 2010 papers are now available for download


This evening I received an email that Min-Yen Kan, who maintains the ACL Anthology, sent to the ACL Anthology Google Group, announcing that the ACL 2010 papers are now available for download, including full papers, short papers, student research workshop papers, demonstrations, tutorial abstracts, and the papers of all the workshops. That reminded me that the ACL 2010 conference opens today (July 11). The download addresses are given below for interested readers.


1. ACL 2010 conference proceedings:
Proceedings of the ACL 2010 conference can be found here:
http://www.aclweb.org/anthology/P/P10/
These include both volumes: (I) full papers and (II) short papers, student research workshop papers, demonstrations and tutorial abstracts.


2. Workshop proceedings:
The proceedings of the workshops and conferences co-located with ACL 2010 are now online.
http://www.aclweb.org/anthology/W/W10/
(scroll towards the bottom of the table of contents)


* Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
* Fourth Linguistic Annotation Workshop
* 2010 Workshop on Biomedical Natural Language Processing
* 2010 Workshop on Cognitive Modeling and Computational Linguistics
* 2010 Workshop on NLP and Linguistics: Finding the Common Ground
* 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology
* TextGraphs-5 – 2010 Workshop on Graph-based Methods for Natural Language Processing
* 2010 Named Entities Workshop
* 2010 Workshop on Applications of Tree Automata in Natural Language Processing
* 2010 Workshop on Domain Adaptation for Natural Language Processing
* 2010 Workshop on Companionable Dialogue Systems
* 2010 Workshop on GEometrical Models of Natural Language Semantics


Note: when reposting, please credit the source 我爱自然语言处理 (52nlp): www.52nlp.cn


 

Censorship-circumvention articles (technical)

Chipping Away at Censorship Firewalls with User-Generated Content