Creating objective evaluations, or the possibilities of similarity analysis

László PITLIK, István Pető, László BUNKÓCZI


SZIE, Institute for Business Analysis and Methodology

„Futures Generation for Future Generations”

World Futures Studies Federation (WFSF) 19th World Conference

Budapest, Corvinus University, 21–24 August 2005

Introduction

The aim of the conference is to give generations a chance to exchange opinions and to think together constructively. Its sections cover a wide range of research areas: demography, conforming and deviant forms of behaviour, the future-shaping role of informatics, the development of communities, gender issues, and the comparison of cultures.

Facts, and the conclusions that can be drawn from them with a given degree of reliability, can be explored separately within each problem group. Nevertheless, if the information society is defined as the effort towards maximal partial efficiency (automation), the question arises: what do the topics of the conference sections have in common? Is it possible, and necessary, to handle similar problems in a unified way (systematically and automatically)?

In 2004 the Department of Business Informatics introduced, at national and international level, the COCO method (Component-based Object Comparison for Objectivity), documented to date in 63 Hungarian- and foreign-language theoretical essays and case studies.

The essence of the method: starting from object-attribute matrices of arbitrary space- and time-connected objects (time-series-based, generation-based: countries, cultures, people, enterprises, products, services, etc.) and arbitrary measurable or observable attributes, it derives evaluations of the objects that are free of expert attitudes (weighting, scoring) and valid for both present and future: under- and over-estimation, object-specific danger and opportunity positions, expectable directions of state change, and the contribution of factors that are hard to value directly (such as brand, culture, religion).
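To make this tangible, a minimal Python sketch of the core idea follows. It is not the published COCO implementation: each attribute is converted to ranks, and additive component values per (attribute, rank) pair are fitted so that their sum approximates a target index. The staircase (monotonicity) constraints of COCO proper are omitted for brevity, and all function names and data are illustrative.

```python
import numpy as np

def coco_fit(X, y):
    """Fit additive component values to per-attribute ranks, in the spirit
    of COCO: y_i ~ sum_j v[j, rank(X[i, j])].  Returns the rank matrix and
    the component table v (n_attrs x n_objects, indexed by rank)."""
    n, m = X.shape
    # Rank each attribute column (0 = smallest value).
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    # One-hot encode every (attribute, rank) pair into a design matrix.
    D = np.zeros((n, m * n))
    for i in range(n):
        for j in range(m):
            D[i, j * n + ranks[i, j]] = 1.0
    # Unconstrained least squares; COCO proper adds staircase
    # (monotonicity) constraints, omitted here for brevity.
    v, *_ = np.linalg.lstsq(D, y, rcond=None)
    return ranks, v.reshape(m, n)

def coco_estimate(ranks, v):
    """Sum the fitted components to reconstruct the estimated target."""
    n, m = ranks.shape
    return np.array([sum(v[j, ranks[i, j]] for j in range(m)) for i in range(n)])

# Toy usage: 5 objects, 3 attributes, one target index (all data invented).
X = np.array([[3., 1., 4.], [2., 2., 3.], [5., 1., 2.], [4., 3., 5.], [1., 4., 1.]])
y = np.array([10., 8., 9., 14., 6.])
ranks, v = coco_fit(X, y)
print(y - coco_estimate(ranks, v))  # positive = under-, negative = over-estimation
```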

For questions where the objective approach is hindered by missing statistical data, the task of research, building on the discrete experiences derived from point-based data, is to show what kind of data are required at international level, and which automated analyses (cf. SWOT) are able to detect, monitor and help avoid situations where balance is being lost (cf. sustainability, decision support).

Serious science-fiction literature, as a kind of factory of future images, shows clearly that the balance of social processes is assured when human subjectivity is crowded out at the operative level: from the definition of sustainability, from the monitoring of losses of balance, and from making the necessary decisions. Democracy in the information society is nothing else than accepting by majority, introducing and operating the scientifically grounded, strategically relevant automatisms (indicators, simulations, decision support) that take shape once access to data, to analytical methods and to education is relatively unlimited.

In another approach, similarity analysis aiming at objectivity may provide the common, all-in-one mathematics (polito-metrics) of multiculturalism, positive discrimination and racism. A good example is a project plan examining how the EU countries are becoming more and more European. The essence of polito-metrics in this case: if a group can be formed that shows what may be called the European pattern on the basis of arbitrary social indexes, then the distance of any other country from it can be calculated objectively. Along this calculation it may turn out that the examined country is more European than assumed, or, if not, the factors keeping it far from the others can be identified (cf. SWOT). In a given case the effect of, or connection with, religion can be quantified against a chosen value (e.g. GDP, number of suicides, migration, unemployment). If religion shows a significant negative connection with important attributes, the mathematical equation describing racism is ready; conversely, the same machinery yields the equation for positive discrimination.
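A minimal sketch of the distance calculation described above; the indicator values, country set and the choice of Euclidean distance on standardized indexes are illustrative assumptions:

```python
import numpy as np

# Illustrative social indexes (rows: reference "European pattern" countries,
# columns: arbitrary standardized indicators); all numbers are made up.
pattern = np.array([[0.8, 0.6, 0.7],
                    [0.7, 0.7, 0.6],
                    [0.9, 0.5, 0.8]])
candidate = np.array([0.4, 0.9, 0.3])  # the examined country

# Standardize by the pattern group's own mean and spread, then measure
# how far the candidate lies from the pattern centroid.
mu, sigma = pattern.mean(axis=0), pattern.std(axis=0) + 1e-9
distance = np.linalg.norm((candidate - mu) / sigma)
# Per-index contributions show WHICH factors keep the country far (cf. SWOT).
contributions = ((candidate - mu) / sigma) ** 2
print(distance, contributions)
```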

The role of similarity analysis in futurology

The development and change of systems without a goal may be scaled and shaped arbitrarily, and as such cannot be analysed in essence (cf. chance). If the goal of human societies is happiness, and happiness is defined as recognised objective necessity (cf. Montesquieu), then the strategic frames necessary for generating futures are already defined. Although "happiness" cannot be measured, recognised necessity can be measured simply enough: it is nothing else than having correct forecasts (for arbitrary time horizons and content, as accurate as possible), in other words calculability, or, with a finer word, sustainability (cf. a legitimate choice between projected scenarios), together with improving the automation of routine tasks.
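Measured this way, recognised necessity reduces to tracking forecast accuracy; a minimal sketch with invented numbers:

```python
import numpy as np

# Invented example: forecasts and later-observed facts for one indicator.
forecast = np.array([102., 98., 110., 105.])
fact     = np.array([100., 99., 107., 112.])

# Mean absolute (and relative) error as a simple measure of how well
# "objective necessity" has been recognised; lower error = better recognition.
mae  = np.mean(np.abs(forecast - fact))
mape = np.mean(np.abs(forecast - fact) / np.abs(fact))
print(mae, mape)
```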

In this seemingly highly automated picture of society, man does not lose his creative, intuitive and associative character. The basis of these specifically human capabilities is precisely that, insofar as man is really Human, he cannot be automated as long as he moves on a knowledge level not yet explored (cf. a successful research engineer vs. a Stakhanovite line worker). Modern phrases such as the information society and knowledge management certify the recognition of this new direction.

Creating futures is not an ideological, unrealistic, self-serving theory, but a mix of the basics of systems theory, of the self-justifying empirical approach, and of automation (cf. cybernetics), which includes the principles of adaptation as they have been defined for experimental science for centuries. Futurology is thus an experimental science: the accuracy of forecasts can be examined well, and it can be expected that, by interpreting the logical errors behind the misses, the objective regularities will be recognised with ever greater certainty.

The role of similarity in this process is complex. The potential futures, where the goal is to specify the potential changes of arbitrary attributes of arbitrary objects, may be diverse. Experience shows that changes affecting numerous attributes and objects at once cannot be arbitrarily eccentric or extreme: the joint change of the elements of one system has to lead to a consistent future. It is not permissible to think in forecasts of partial phenomena, since in a concrete question choosing the best forecast in advance (the one closest to the future reality) is almost impossible (cf. the probability of a discrete estimate). Although it is not self-evident which complex error structure is more advantageous when several factors move together, whoever tries to see the future in its whole complexity will have a serious advantage. It is important to point out that several consistent futures may exist at the same moment of an analysis. This recognition shows the room for movement of individuals (and of chaos, too). For an individual, sovereignty is assumed; for a large human group, the statistical average reaction may affect the system itself (cf. Asimov's Seldon theory).

An important aspect of consistency is that every object/attribute that is in a heavily extreme position relative to the other objects is subject to an ever stronger force of re-alignment. Future changes can thus be described, first, from the passing of time, then from the differences in space, and thirdly on the basis of the fields of force described by diverging similarities; these fields of force are not independent of each other, which is why they are capable of checking one another. A sketch of flagging such extreme positions follows.
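A minimal sketch of detecting the heavily extreme object/attribute positions mentioned above; the z-score criterion and the |z| > 2 threshold are assumptions, not part of the COCO specification, and the data are invented:

```python
import numpy as np

# Illustrative object-attribute matrix (rows: objects, columns: attributes).
X = np.array([[1.0, 5.0, 2.1],
              [1.1, 5.2, 2.0],
              [0.9, 4.9, 2.2],
              [1.2, 5.1, 2.1],
              [1.0, 5.0, 2.0],
              [4.0, 5.1, 2.1]])  # last object is extreme on attribute 0

# Z-score each attribute; cells far from the rest are candidates for the
# "back-alignment" force described above (assumed threshold: |z| > 2).
z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
extreme = np.argwhere(np.abs(z) > 2.0)
print(extreme)  # (object, attribute) pairs in extreme position
```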

In terms of the principles of the conference: our goal is to generate futures "far enough away to create, but close enough to live in."

Instead of a unique analysis in the abstract (a few dozen case studies already demonstrate the possibilities of the COCO methodology), a non-exhaustive but sufficiently detailed list follows of the types of analysis that can be performed (automatically, once the base data are collected) for times that are relevant yet far enough away, where and when expectations are drafted:

-Meteorology, geology, hydrology (etc.): by measuring the environment continuously and ever more frequently, the joint changes can be foreseen with greater accuracy and over ever longer horizons.

-Stock market, price analyses: analysing stock market changes is a good example of computing group-behaviour patterns.

-Headhunter companies, suitability examinations for certain jobs, IQ and other psychological tests, graphology!, astrology?, palmistry?: the expected utility and future of an individual can be explored from the searched-for and the given profiles (see the sketch after this list).

-Project view, monitoring principles, equality of chances, precedent theory: efficient work organisation and the application of law offer good examples of similarity- and fact-based process control.

-Public-use data, constitutional rights: the key to sustainable development is that anybody may know all the facts, anyone may compare any objects with each other if they seem similar, and after such an analysis may see himself as lagging behind, as a winner, or as being in equilibrium (cf. the gender problem, the state of minorities). …
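As promised above for the profile-matching item, a minimal sketch that scores candidates against a searched-for profile; the trait values and the use of cosine similarity are illustrative assumptions (a COCO-style analysis would compare rank by rank instead):

```python
import numpy as np

# Invented trait profiles (e.g. test scores scaled to 0..1).
searched = np.array([0.9, 0.6, 0.8, 0.5])          # profile the job requires
candidates = {
    "A": np.array([0.8, 0.7, 0.9, 0.4]),
    "B": np.array([0.5, 0.9, 0.4, 0.9]),
}

# Rank candidates by similarity between the given and the searched profiles.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, profile in sorted(candidates.items(),
                            key=lambda kv: -cosine(searched, kv[1])):
    print(name, round(cosine(searched, profile), 3))
```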

The COCO methodology

Consistency

Connection between generations and COCO similarity analysis

Firstly it’s worth to clarify how the „generation” phase can be interpreted:

-in classical term: age groups following each other,

-in globalisation approach: similar objects in different development phase,

-universal: one after the other.

In a similar way, the process of COCO-based similarity analysis should be summarised briefly:

-comparing objects along the same types of indexes,

-rating the indexes without experts (scoring, weighting),

-objectively explaining the observed differences in the case of given goal phenomena,

-where the objective of the analysis may be forecasting, distance identification, or grouping (see the sketch below).
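For the last objective, grouping, a minimal sketch that assigns objects to the nearest of two given group centroids on standardized indexes; the data and the fixed centroids are assumptions (a full analysis would learn the groups, e.g. with k-means):

```python
import numpy as np

# Invented object-index matrix (rows: objects, columns: indexes).
X = np.array([[1.0, 10.], [1.2, 11.], [5.0, 2.], [5.5, 3.]])

# Standardize so every index counts equally in the distance.
Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

# Two assumed group centroids in standardized space (normally learned).
centroids = np.array([[-1.0, 1.0], [1.0, -1.0]])

# Grouping = nearest-centroid assignment.
labels = np.argmin(((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
print(labels)  # expected: objects 0,1 -> group 0; objects 2,3 -> group 1
```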

What kind of connection may exist between the two concepts?

-a dependent/subordinate connection, when one concept can be interpreted as part of the other,

-a coordinate connection, which has to be unfolded into different formations:

  • the two phenomena have a common set (intersection),
  • the two phenomena have no common set; only a loose associative connection can be discovered.

If similarity analysis is seen as the process of getting to know real life, then, from the three definitions given above, the following may enter the intersection of the two concepts as their main parts:

-arbitrary objects following one another (as the base elements of case-based reasoning),

-and the analytic processes executed on them.

The existence of generations (in space and in time) may serve as a base pattern (base data, benchmark) for the analysis. For the self-evaluation of analyses, i.e. when specifying the condition set for quitting a learning process, the generational structure provides the basics for the definition of learning and test data, and for the experienced older expert's heuristics in error-search strategies and consistency principles. A sketch of such a generational split follows.
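A minimal sketch of the generational definition of learning and test data; the invented yearly records and the split point are assumptions:

```python
import numpy as np

# Invented records: (year, value) pairs ordered in time.
years  = np.arange(1990, 2005)
values = 100 + 2 * (years - 1990) + np.random.default_rng(0).normal(0, 1, years.size)

# Older generation = learning data, younger generation = test data.
split = years < 2000                      # assumed split point
X_learn, y_learn = years[split],  values[split]
X_test,  y_test  = years[~split], values[~split]

# Fit a trivial trend on the learning generation, test it on the next one.
a, b = np.polyfit(X_learn, y_learn, 1)
test_error = np.mean(np.abs((a * X_test + b) - y_test))
print(test_error)  # an exit condition for learning could cap this error
```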

In another approach, "generation" refers not only to generations of people but also to the act of generating. The connection between this sense of the word and similarity analysis is much more one of dependence/subordination than before, since generating (creating one or more elements/objects within a space of possibilities) is a concept that provides a frame for automated analyses. Generating futures presupposes the generation of mechanisms that calculate themselves; this process can be divided into the definition of data (including the generation of fictive data), the specification of the operations from which the models (source codes) can be generated, and the definition of the evaluation mechanisms of the generated models (a consistent exit condition from learning), where the checking algorithm can itself be seen as part of the model, which again presupposes source-code generation. A sketch of this generate-and-evaluate loop follows.
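A minimal sketch of such a generate-and-evaluate loop; real model factories would emit source code, whereas here the "generated models" are just polynomial degrees, and the data, candidate set and exit threshold are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # fictive data

# Generational split: learn on the first part, evaluate on the rest.
x_l, y_l, x_t, y_t = x[:30], y[:30], x[30:], y[30:]

best, best_err = None, np.inf
for degree in range(1, 8):                 # "generated" candidate models
    coeffs = np.polyfit(x_l, y_l, degree)
    err = np.mean(np.abs(np.polyval(coeffs, x_t) - y_t))
    if err < best_err - 1e-3:              # consistent exit condition:
        best, best_err = degree, err       # stop when no real improvement
    else:
        break
print(best, best_err)
```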

Summary

Although modelling's (future generating's) basic question ("which of two models is the better?") marks a philosophical borderline that can never be fully resolved, by building ever more sophisticated model (self-)checking frame systems, and by consistently integrating the knowledge contained in the patterns of the known or supposed past, the potential (alternative) complex joint changes can be deduced more and more automatically (without ideological foundations). The deduction methodology will be able to refine itself through fact-estimation comparisons and through complex analysis of the ways leading to those comparisons. The philosophy of recognised necessity may in this way develop futurology into an industry-like service. Since the future can never be known to its full depth, the social and individual handling of the ever-present estimation error, and the sustaining of model factories, may move the people of the information society off the present development orbit based on monopolisation/mystery (cf. data protection), hoax (cf. marketing) and corruption (cf. Hankiss's common-meadow model), provided that the tension between instinct-based (e.g. mimicry) and conscious action (social optimum) is genetically kept within a liveable range…

References