
The Data Miner Workspace, with the duplicate Training data set and the Classification Tree algorithm added, now looks like Figure 5.14 after this algorithm has been run.
The Classification Tree results, obtained from the Results icon in the green panel on the right and illustrated in Figure 5.15, show that in this simple example only one variable was needed to determine whether a particular credit transaction was advisable. (Of course, real credit scoring is more complicated than this; the DVD accompanying this book includes a complete Credit Scoring Tutorial to work through, which provides further details and shows how accurate this one variable is in judging any one applicant for credit.)
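The idea of a one-variable decision can be sketched outside the software as well. The following is a minimal, hypothetical illustration — the `debt_ratio` field and the data are invented, not taken from the book's credit-scoring data set. It searches a single variable for the threshold that best separates good from bad outcomes, which is what a one-variable classification tree does at its root split.

```python
# Minimal sketch of a one-variable decision stump, illustrating how a single
# predictor can drive a credit decision. The data and the "debt_ratio" field
# are hypothetical, not the book's credit-scoring data set.

def best_stump(values, labels):
    """Find the threshold on one variable that best separates the labels."""
    pairs = sorted(zip(values, labels))
    best = (None, -1.0)  # (threshold, accuracy)
    n = len(pairs)
    for i in range(1, n):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        # Predict "good" below the threshold, "bad" at or above it.
        correct = sum((v < thr) == (y == "good") for v, y in pairs)
        acc = max(correct, n - correct) / n  # allow either orientation
        if acc > best[1]:
            best = (thr, acc)
    return best

# Hypothetical applicants: a low debt ratio tends to mean a good credit risk.
debt_ratio = [0.1, 0.2, 0.25, 0.3, 0.6, 0.7, 0.8, 0.9]
label      = ["good", "good", "good", "good", "bad", "bad", "bad", "bad"]

thr, acc = best_stump(debt_ratio, label)
print(thr, acc)  # a threshold near 0.45 separates the two classes perfectly
```

On this toy data a single threshold classifies every case correctly, mirroring the book's point that one well-chosen variable can carry the whole decision.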
However, the flexibility and customization available in this software allow all kinds of variations, so one will be demonstrated using a computer chip/wafer chip manufacturing data set consisting of 2858 variables and 2062 cases. Naturally, 2858 variables are too many to keep track of for quality control on an assembly line, so the critical need in this example is to reduce the number of variables fed into the Quality Control data mining algorithms to as few as possible while still maintaining 95–99% quality in the wafer chips coming off the assembly line.
In the Data Miner Workspace shown in Figure 5.16, a node called Analyze Variable Lists to Determine Categorical Variables is added. When this node is run, it produces two Wafer Yields data sets, which will be used to make Scatterplots by Time (Figure 5.17); note that this is a special Feature Selection icon, with a different name from the one in the previous example.
In this case, we are asking for the top 25 predictor variables based on the Chi-square method. Thus, the variables with the 25 highest Chi-square scores will be selected; their p-values are also given, but the variables are ordered in decreasing Chi-square value, as seen in the results table in Figure 5.18.
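The Chi-square ranking described above can be sketched as follows. This is a minimal, standard-library-only illustration on invented data, not the software's implementation: each categorical predictor is scored against the target with the Chi-square statistic on its contingency table, and the k highest-scoring variables are kept.

```python
# Sketch of Chi-square feature ranking: score each categorical predictor
# against the target, then keep the k highest-scoring variables.
# Variable names ("f1", "f2") and data are invented for illustration.

from collections import Counter

def chi_square(xs, ys):
    """Chi-square statistic for two categorical sequences of equal length."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # observed counts of each (x, y) pair
    px = Counter(xs)
    py = Counter(ys)
    stat = 0.0
    for x in px:
        for y in py:
            expected = px[x] * py[y] / n          # count under independence
            observed = joint.get((x, y), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

def top_k_features(columns, target, k):
    """Return the names of the k columns with the highest Chi-square scores."""
    scores = {name: chi_square(vals, target) for name, vals in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy wafer-yield data: "f1" tracks the target, "f2" is pure noise.
target = [1, 1, 1, 1, 0, 0, 0, 0]
columns = {
    "f1": [1, 1, 1, 0, 0, 0, 0, 0],
    "f2": [0, 1, 0, 1, 0, 1, 0, 1],
}
print(top_k_features(columns, target, 1))  # → ['f1']
```

Scaling this idea up — scoring all 2858 variables and keeping only the top 25 — is the kind of reduction the Feature Selection icon performs before the quality-control models are built.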
FIGURE 5.14 Addition of duplicate training data icon and standard Classification tree icon to this Data Miner
project.
90 5. FEATURE SELECTION