
File Processing Systems

Even the earliest business computer systems were used to process business records and produce information. They were generally faster and more accurate than equivalent manual systems. These systems stored groups of records in separate files, and so they were called file processing systems. Although file processing systems are a great improvement over manual systems, they do have the following limitations:

Data is separated and isolated.

Data is often duplicated.

Application programs are dependent on file formats.

It is difficult to represent complex objects using file processing systems.

Data is separate and isolated. Recall that as the marketing manager you needed to relate sales data to customer data. Somehow you need to extract data from both the CUSTOMER and ORDER files and combine it into a single file for processing. To do this, computer programmers determine which parts of each of the files are needed. Then they determine how the files are related to one another, and finally they coordinate the processing of the files so the correct data is extracted. This data is then used to produce the information. Imagine the problems of extracting data from ten or fifteen files instead of just two!

Data is often duplicated. In the record club example, a member’s name, address, and membership number are stored in both files. Although this duplicate data wastes a small amount of file space, that is not the most serious problem with duplicate data. The major problem concerns data integrity. A collection of data has integrity if the data is logically consistent. This means, in part, that duplicated data items agree with one another. Poor data integrity often develops in file processing systems. If a member were to change his or her name or address, then all files containing that data need to be updated. The danger lies in the risk that all files might not be updated, causing discrepancies between the files. Data integrity problems are serious. If data items differ, inconsistent results will be produced. A report from one application might disagree with a report from another application. At least one of them will be incorrect, but who can tell which one? When this occurs, the credibility of the stored data comes into question.

Application programs are dependent on file formats. In file processing systems, the physical formats of files and records are entered in the application programs that process the files. In COBOL, for example, file formats are written in the DATA DIVISION. The problem with this arrangement is that changes in file formats result in program updates. For example, if the Customer record were modified to expand the ZIP Code field from five to nine digits, all programs that use the Customer record need to be modified, even if they do not use the ZIP Code field. There might be twenty programs that process the CUSTOMER file. A change like this one means that a programmer needs to identify all the affected programs, then modify and retest them. This is both time-consuming and error-prone. It is also very frustrating to have to modify programs that do not even use the field whose format changed.

It is difficult to represent complex objects using file processing systems. This last weakness of file processing systems may seem a bit theoretical, but it is an important shortcoming.
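
To make the first and third of these limitations more concrete, the sketch below shows the kind of program a file processing system forces you to write. The file names, fixed-width record layouts, and field names are all hypothetical, invented purely for illustration; the point is that the layouts live inside the program, and the program itself must coordinate the two files.

# Hypothetical sketch of relating CUSTOMER and ORDER data in a file processing
# system. Record layouts and field names are invented for illustration only.

CUSTOMER_FILE = "CUSTOMER.dat"   # hypothetical file names
ORDER_FILE = "ORDER.dat"

def read_customers(path):
    """Parse fixed-width customer records; the layout is hard-coded here."""
    customers = {}
    with open(path) as f:
        for line in f:
            customers[line[0:6].strip()] = {        # columns 1-6: customer id
                "name": line[6:36].strip(),         # columns 7-36: name
                "zip": line[36:41].strip(),         # columns 37-41: 5-digit ZIP
                "phone": line[41:53].strip(),       # columns 42-53: phone
            }
    return customers

def read_orders(path):
    """Parse fixed-width order records; again the layout lives in the program."""
    orders = []
    with open(path) as f:
        for line in f:
            orders.append({
                "cust_id": line[0:6].strip(),       # must match CUSTOMER's id field
                "order_no": line[6:14].strip(),
                "amount": float(line[14:24]),
            })
    return orders

def sales_by_customer():
    """The program itself must coordinate the two files to relate the data."""
    customers = read_customers(CUSTOMER_FILE)
    totals = {}
    for order in read_orders(ORDER_FILE):
        totals[order["cust_id"]] = totals.get(order["cust_id"], 0.0) + order["amount"]
    return {customers[cid]["name"]: total
            for cid, total in totals.items() if cid in customers}

Notice that if the ZIP Code field were widened from five to nine digits, the phone column offsets would shift, and every program holding its own copy of this layout would have to be found, modified, and retested, even programs that never touch the ZIP Code.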

Get Better Google Rankings and Increase Web Traffic With RSS Syndication (Page 1 of 2)

So maybe you have heard about Really Simple Syndication (RSS) and seen the RSS symbol on websites?

Perhaps you have even used it to catch up on your daily news from your favorite websites. Great free resources like Google Reader make it a breeze to keep up with the latest happenings on topics of interest to you.

But there is another side to RSS I would like to discuss with you today: RSS syndication, also known as RSS submission, which is the process of using this technology to get better page rankings in the search engines and increase web traffic to your website.

I am going to suggest that this is a vital process for you to understand and start using, because the vast majority of your competition (other websites) is not leveraging this technology. Feed submission is therefore a great way to help you leapfrog your competition.

But let's take a step back and understand how the process works. First, in order to use RSS like this, you need an RSS feed.

Think of an RSS feed as just a description of one or more pages on a website. It's a standardized XML format that a huge number of programs across the internet know how to read.

Computers being what they are, they cannot make sense of a piece of information unless they understand its structure. Because RSS is a documented format, every program that supports RSS knows how to read a feed and how to process its contents.
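
To make that concrete, here is a minimal sketch of how a program reads a feed, using Python's standard library. It assumes a plain RSS 2.0 feed, and the feed URL is a placeholder rather than a real address.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://www.example.com/feed"  # placeholder; use your own site's feed

# Fetch and parse the feed. This assumes plain RSS 2.0 (no Atom namespaces).
with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Every RSS 2.0 feed has a <channel> containing one <item> per piece of content.
channel = tree.getroot().find("channel")
print("Feed title:", channel.findtext("title"))
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))

Because the structure is the same for every feed, these few lines work just as well against a news site, a blog, or an aggregator; that shared, documented structure is exactly why so many programs can consume RSS.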

Many websites have this technology built in. If you are running a WordPress blog, a feed is generated automatically for every post you make on the site.

And even if you're running a static website without RSS feeds, you can invest in an inexpensive RSS script to produce feeds for your site.
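
As a rough illustration of what such a script does, the sketch below builds a small RSS 2.0 feed from a hand-maintained list of pages using Python's standard library. The site name, page titles, and URLs are made-up examples.

import xml.etree.ElementTree as ET
from email.utils import formatdate

PAGES = [  # hypothetical pages on a static site
    {"title": "About Our Widgets", "url": "https://www.example.com/widgets.html"},
    {"title": "Latest Widget News", "url": "https://www.example.com/news.html"},
]

# Build the RSS 2.0 skeleton: one <channel> describing the site as a whole.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Widgets"
ET.SubElement(channel, "link").text = "https://www.example.com/"
ET.SubElement(channel, "description").text = "Updates from the Example Widgets site"

# One <item> per page, with the RFC 822 date format RSS expects.
for page in PAGES:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = page["title"]
    ET.SubElement(item, "link").text = page["url"]
    ET.SubElement(item, "pubDate").text = formatdate()

# Write the feed where your web server can serve it, e.g. as /feed.xml.
ET.ElementTree(rss).write("feed.xml", encoding="utf-8", xml_declaration=True)

Upload the resulting feed.xml alongside your pages and link to it, and as far as readers and aggregators are concerned your static site behaves just like a blog with feeds built in.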

As I mentioned previously, RSS technology has mainly been used to pull in information from a group of websites (news, new content, and so on). It has not really been used to bring visitors to your website or to improve your rankings in Google.

But exciting new software is coming out that leverages the power of RSS to get the word out about your website.

Entire websites have been, and continue to be, built as RSS aggregators: sites dedicated to collecting information about the content published on other websites. And guess what? They use RSS technology as well.

Google, the premier search engine, itself invested millions in purchasing a major RSS service called FeedBurner. It then added a module for RSS feeds to its AdSense program, the most widely used technology for placing advertisements on websites today.

I hope you're starting to see where I am going with this: if entire sites are setting themselves up to use RSS exclusively, and Google itself is investing heavily in RSS technology, then this is something internet marketers should be looking at.

Google realizes that RSS is all about tracking content changes, because any time a site that uses RSS technology adds content, its feed is updated automatically.
