Wednesday
What happened to Yahoo!!
I tried it a lot of times but it always says "INTERNAL ERROR SERVICE CONNECTION TERMINATED". What does this mean????
Of course, to find my answer again, I consulted my web doctor, Dr. Google..see, just a click of your mouse and you can always find an answer to your question!!! Remember what I said in my header!! Just browse and you will find; anything that matters in this vast universe is at your hands!!! originally from Ruby Benz.....the WWW Addict!!! wheww!!
Dr. G directed me to the Yahoo website's question and answer portion..IESCT probably means that Yahoo is having problems with the server at this time and is working on getting it fixed. And yes, it was fixed..I just signed in and have it now!!
Thanks to Yahoo and Google for your services!! You're great!!
Friday
Users fight to save Windows XP
I am sharing this with all of you guys, esp. those who are using Windows XP..I am still using it till now..I guess this is just very important info..here it is!!
Users Fight to Save Windows XP
By JESSICA MINTZ, AP Technology Writer Mon Apr 14, 7:49 AM ET
SEATTLE - Microsoft Corp.'s operating systems run most personal computers around the globe and are a cash cow for the world's largest software maker. But you'd never confuse a Windows user with the passionate fans of Mac OS X or even the free Linux operating system. Unless it's someone running Windows XP, a version Microsoft wants to retire.
Fans of the six-year-old operating system set to be pulled off store shelves in June have papered the Internet with blog posts, cartoons and petitions recently. They trumpet its superiority to Windows Vista, Microsoft's latest PC operating system, whose consumer launch last January was greeted with lukewarm reviews.
No matter how hard Microsoft works to persuade people to embrace Vista, some just can't be wowed. They complain about Vista's hefty hardware requirements, its less-than-peppy performance, occasional incompatibility with other programs and devices and frequent, irritating security pop-up windows.
For them, the impending disappearance of XP computers from retailers, and the phased withdrawal of technical support in coming years, is causing a minor panic.
Take, for instance, Galen Gruman. A longtime technology journalist, Gruman is more accustomed to writing about trends than starting them.
But after talking to Windows users for months, he realized his distaste for Vista and strong attachment to XP were widespread.
"It sort of hit us that, wait a minute, XP will be gone as of June 30. What are we going to do?" he said. "If no one does something, it's going to be gone."
So Gruman started a Save XP Web petition, gathering since January more than 100,000 signatures and thousands of comments, mostly from die-hard XP users who want Microsoft to keep selling it until the next version of Windows is released, currently targeted for 2010.
On the petition site's comments section, some users proclaimed they will downgrade from Vista to XP — an option available in the past to businesses, but now open for the first time to consumers who buy Vista Ultimate or Business editions — if they need to buy a new computer after XP goes off the market.
Others used the comments section to rail against the very idea that Microsoft has the power to enforce the phase-out from a stable, decent product to one that many consider worse, while profiting from the move. Many threatened to leave Windows for Apple or Linux machines.
Microsoft already extended the XP deadline once, but it shows no signs it will do so again. The company has declined to meet with Gruman to consider the petition. Microsoft is aware of the petition, it said in a statement to The Associated Press, and "will continue to be guided by feedback we hear from partners and customers about what makes sense based on their needs."
Gruman said he'd keep pressing for a meeting.
"They really believe if they just close their eyes, people will have no choice," he said.
In fact, most people who get a new computer will end up with Vista. In 2008, 94 percent of new Windows machines for consumers worldwide will run Vista, forecasts industry research group IDC. For businesses, about 75 percent of new PCs will have Vista. (That figure takes into account companies that choose to downgrade to XP.)
Although Microsoft may not budge on selling new copies of XP, it may have to extend support for it.
Al Gillen, an IDC analyst, estimated that at the end of 2008 nearly 60 percent of consumer PCs and almost 70 percent of business PCs worldwide will still run XP. Microsoft plans to end full support — including warranty claims and free help with problems — in April 2009. The company will continue providing a more limited level of service until April 2014.
Gillen said efforts like Gruman's grass-roots petition may not influence the software maker, but business customers' demands should carry more clout.
"You really can't make 69 percent of your installed base unhappy with you," he said.
Some companies — such as Wells Manufacturing Co. in Woodstock, Ill. — are crossing their fingers that he's right. The company, which melts scrap steel and casts iron bars, has 200 PCs that run Windows 2000 or XP. (Windows 2000 is no longer sold on PCs. Mainstream support has ended, but limited support is available through the middle of 2010.)
Wells usually replaces 50 of its PCs every 18 months. In the most recent round of purchases, Chief Information Officer Lou Peterhans said, the company stuck with XP because several of its applications don't run well on Vista.
"There is no strong reason to go to Vista, other than eventually losing support for XP," he said. Peterhans added that the company isn't planning to bring in Vista computers for 18 months to two years. If Microsoft keeps to its current timetable, its next operating system, code-named Windows 7, will be on the market by then.
___
source: yahoo news
On the Net:
Save XP Petition: http://weblog.infoworld.com/save-xp/
Microsoft's Windows support timeline: http://support.microsoft.com/gp/lifepolicy
Monday
PageRank
PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank[3]) and later Sergey Brin as part of a research project about a new kind of search engine. The project started in 1995 and led to a functional prototype, named Google, in 1998. Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors which determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.[1]
PageRank is based on citation analysis that was developed in the 1950s by Eugene Garfield at the University of Pennsylvania. Google's founders cite Garfield's work in their original paper. In this way virtual communities of webpages are found. Teoma's search technology uses a communities approach in its ranking algorithm. NEC Research Institute has worked on similar technology. Web link analysis was first developed by Jon Kleinberg and his team while working on the CLEVER project at IBM's Almaden Research Center.
Algorithm
PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for any-size collection of documents. It is assumed in several research papers that the distribution is evenly divided between all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.
A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.
Simplified algorithm
Assume a small universe of four web pages: A, B, C and D. The initial approximation of PageRank would be evenly divided between these four documents. Hence, each document would begin with an estimated PageRank of 0.25.
In the original form of PageRank, initial values were simply 1. This meant that the sum of PageRank over all pages was the total number of pages on the web. Later versions of PageRank (see the formulas below) would assume a probability distribution between 0 and 1. Here we're going to simply use a probability distribution, hence the initial value of 0.25.
If pages B, C, and D each only link to A, they would each confer 0.25 PageRank to A. All PageRank PR( ) in this simplistic system would thus gather to A because all links would be pointing to A.
But then suppose page B also has a link to page C, and page D has links to all three pages. The value of the link-votes is divided among all the outbound links on a page. Thus, page B gives a vote worth 0.125 to page A and a vote worth 0.125 to page C. Only one third of D's PageRank is counted for A's PageRank (approximately 0.083).
In other words, the PageRank conferred by an outbound link L( ) is equal to the document's own PageRank score divided by the normalized number of outbound links (it is assumed that links to specific URLs only count once per document).
In the general case, the PageRank value for any page u can be expressed as:
PR(u) = Σ_{v ∈ Bu} PR(v) / L(v),
i.e. the PageRank value for a page u is dependent on the PageRank values for each page v out of the set Bu (this set contains all pages linking to page u), divided by the number L(v) of links from page v.
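To make this concrete, here is a minimal Python sketch of one pass of the simplified rule, using the four-page example above; the link structure (B links to A and C, C links to A, D links to A, B and C) and the page names are assumptions taken from the prose, and the printed values reproduce the 0.125 and 0.083 votes mentioned earlier.

# One pass of the simplified rule PR(u) = sum of PR(v)/L(v) over pages v linking to u,
# on the toy four-page example from the text.
links = {
    "A": [],                 # A is a sink in this toy example (no outbound links)
    "B": ["A", "C"],
    "C": ["A"],
    "D": ["A", "B", "C"],
}

# Initial approximation: PageRank evenly divided among the four pages (0.25 each).
pr = {page: 1.0 / len(links) for page in links}

# Each page splits its current PageRank equally over its outbound links.
new_pr = {page: 0.0 for page in links}
for page, outlinks in links.items():
    for target in outlinks:
        new_pr[target] += pr[page] / len(outlinks)

print(new_pr)
# B passes 0.125 to A and 0.125 to C; D passes roughly 0.083 to each of A, B and C,
# matching the figures quoted above. Sinks such as A lose their rank in this
# simplified form, which is what the damping-factor handling below addresses.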
Damping factor
The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[4]
The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores.
That is,
PR(A) = (1 - d) + d * ( PR(B)/L(B) + PR(C)/L(C) + PR(D)/L(D) + ... )
or (N = the number of documents in collection)
PR(A) = (1 - d)/N + d * ( PR(B)/L(B) + PR(C)/L(C) + PR(D)/L(D) + ... )
So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The second formula above supports the original statement in Page and Brin's paper that "the sum of all PageRanks is one".[2] Unfortunately, however, Page and Brin gave the first formula, which has led to some confusion.
Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.
The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are all equally probable and are the links between pages.
If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. However, the solution is quite simple. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.
When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web, with a residual probability of usually d = 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature.
So, the equation is as follows:
PR(pi) = (1 - d)/N + d * Σ_{pj ∈ M(pi)} PR(pj) / L(pj)
where p1,p2,...,pN are the pages under consideration, M(pi) is the set of pages that link to pi, L(pj) is the number of outbound links on page pj, and N is the total number of pages.
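As a rough illustration only, here is a hedged Python sketch of the damped formula above, computed by repeated passes ("iterations"); the toy graph, function name and iteration count are assumptions made for the example, not Google's actual implementation.

def pagerank(links, d=0.85, iterations=50):
    """links maps every page to the list of pages it links to."""
    n = len(links)
    ranks = {page: 1.0 / n for page in links}   # start from the uniform distribution
    for _ in range(iterations):
        new_ranks = {}
        for page in links:
            # Sum PR(pj) / L(pj) over every page pj that links to this page.
            incoming = sum(ranks[other] / len(outs)
                           for other, outs in links.items() if page in outs)
            new_ranks[page] = (1 - d) / n + d * incoming
        ranks = new_ranks
    return ranks

# Assumed four-page graph; every page here has at least one outbound link,
# so the special handling of sink pages described above is not needed.
example = {"A": ["B"], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
print(pagerank(example))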
The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is
R = [ PR(p1), PR(p2), ..., PR(pN) ]^T,
where R is the solution of the equation
R = (1 - d)/N * [1, 1, ..., 1]^T + d * ℓ R,
where the adjacency function ℓ(pi, pj) (the entry in row i and column j of the matrix ℓ) is 0 if page pj does not link to pi, and normalised such that, for each j,
Σ_{i=1..N} ℓ(pi, pj) = 1,
i.e. the elements of each column sum up to 1.
This is a variant of the eigenvector centrality measure used commonly in network analysis.
The values of the PageRank eigenvector are fast to approximate (only a few iterations are needed) and in practice it gives good results.
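For the matrix view, a short sketch along these lines (assuming the same toy graph as before and using NumPy purely for illustration) builds the modified adjacency matrix and reads the PageRank values off its dominant eigenvector:

import numpy as np

pages = ["A", "B", "C", "D"]
links = {"A": ["B"], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
n, d = len(pages), 0.85
idx = {p: i for i, p in enumerate(pages)}

# Column-normalised adjacency matrix: entry (i, j) is 1/L(pj) if pj links to pi, else 0.
ell = np.zeros((n, n))
for j, page in enumerate(pages):
    for target in links[page]:
        ell[idx[target], j] = 1.0 / len(links[page])

# Modified matrix combining the damping term and the link structure.
G = (1 - d) / n * np.ones((n, n)) + d * ell

# The dominant eigenvector (eigenvalue 1) holds the PageRank values.
eigenvalues, eigenvectors = np.linalg.eig(G)
dominant = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
ranks = dominant / dominant.sum()    # normalise so the values sum to one
print(dict(zip(pages, ranks.round(3))))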
As a result of Markov theory, it can be shown that the PageRank of a page is the probability of being at that page after lots of clicks. This happens to equal t⁻¹ (i.e. 1/t), where t is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
The main disadvantage is that it favors older pages, because a new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia). The Google Directory (itself a derivative of the Open Directory Project) allows users to see results sorted by PageRank within categories. The Google Directory is the only service offered by Google where PageRank directly determines display order. In Google's other search services (such as its primary Web search) PageRank is used to weight the relevance scores of pages shown in search results.
Several strategies have been proposed to accelerate the computation of PageRank.[5]
Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which seeks to determine which documents are actually highly valued by the Web community.
Google is known to actively penalize link farms and other schemes designed to artificially inflate PageRank. In December 2007 Google started actively penalizing sites selling paid text links. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
source: Wikipedia. http://en.wikipedia.org/wiki/PageRank
HTML Editor
An HTML editor is a software application for creating web pages. Although the HTML markup of a web page can be written with any text editor, specialized HTML editors can offer convenience and added functionality. For example, many HTML editors work not only with HTML, but also with related technologies such as CSS, XML and JavaScript or ECMAScript. In some cases they also manage communication with remote web servers via FTP and WebDAV, and version management systems such as CVS or Subversion.
There are various forms of HTML editors: text, object and WYSIWYG (What You See Is What You Get) editors.
Text (source) editors intended for use with HTML usually provide syntax highlighting. Templates, toolbars and keyboard shortcuts may quickly insert common HTML elements and structures. Wizards, tooltip prompts and auto-completion may help with common tasks.
Text HTML editors commonly include either built-in functions or integration with external tools for such tasks as source and version control, link-checking, code checking and validation, code cleanup and formatting, spell-checking, uploading by FTP or WebDAV, and structuring as a project.
Text editors require user understanding of HTML and any other web technologies the designer wishes to use like CSS, JavaScript and server-side scripting languages.
Object editors
Some editors allow alternate editing of the source text of objects in more visually organized modes than simple color highlighting, but in modes not considered WYSIWYG. Some WYSIWYG editors include the option of using palette windows that enable editing the text-based parameters of selected objects. These palettes allow either editing parameters in fields for each individual parameter, or text windows to edit the full group of source text for the selected object. They may include widgets to present and select options when editing parameters. Adobe GoLive provides an outline editor to expand and collapse HTML objects and properties, edit parameters, and view graphics attached to the expanded objects.
WYSIWYG HTML editors
WYSIWYG HTML editors provide an editing interface which resembles how the page will be displayed in a web browser. Some editors, such as those in the form of browser extensions, allow editing within a web browser. Because using a WYSIWYG editor does not require any HTML knowledge, these editors are easier for an average computer user to get started with.
The WYSIWYG view is achieved by embedding a layout engine based upon that used in a web browser. The layout engine will have been considerably enhanced by the editor's developers to allow for typing, pasting, deleting and moving the content. The goal is that, at all times during editing, the rendered result should represent what will be seen later in a typical web browser.
While WYSIWYG editors make web design faster and easier, many professionals still use text editors, despite the fact that most WYSIWYG editors have a mode to edit HTML code by hand. The web was not originally designed to be a visual medium, and attempts to give authors more layout control, such as CSS, have been poorly supported by major web browsers. Because of this, code automatically generated by WYSIWYG editors frequently sacrifices file size and compatibility with fringe browsers to create a design that looks the same in widely used desktop web browsers. This automatically generated code may be edited and corrected by hand.
WYSIWYM editors
What You See Is What You Mean (WYSIWYM) is an alternative paradigm to the WYSIWYG editors above. Instead of focusing on the format or presentation of the document, it preserves the intended meaning of each element. For example, page headers, sections, paragraphs, etc. are labeled as such in the editing program, and displayed appropriately in the browser.
Friday
Internet Compared to WWW
The Internet, sometimes called the "Information Superhighway", is a worldwide, publicly accessible series of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked web pages and other resources of the World Wide Web (WWW).
TERMINOLOGY:
The Internet and the World Wide Web are not synonymous. The Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections, etc. In contrast, the Web is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The World Wide Web is one of the services accessible via the Internet, along with various others such as e-mail, file sharing and online gaming.