A Polynomial Time Algorithm
for Matrix & Graph Isomorphism


For my lost cat Elsie
October 9, 2004
Michiro Nasu

==========================
       Introduction
==========================

For the Graph Isomorphism Problem many fast algorithms have been developed. Among them "nauty" by B. D. McKay is well known and may be the fastest [1]. Our primary aim is to develop a provably polynomial time algorithm for Graph Isomorphism. A few years ago we developed an experimental program named "Narkissos", which analyzes every spanning tree rooted at each vertex and computes some invariants we devised. At every stage of an experiment it eliminates a vertex from the graph and calls itself recursively. This is a very effective method: it runs pretty fast and has solved every example we have tested. It never backtracks, so we claimed that it runs in polynomial time, but we failed to prove it, as we could not give any clear reasoning for the sufficiency of those graph invariants, which are somewhat complicated and were obtained just empirically.

On the other hand our new challenger Elsie is oriented to Matrix Isomorphism, and designed to be as clear as possible, to fit theoretical criteria, and to possibly admit a final proof. At first we began by constructing an extra matrix called the Delta matrix (an entirely different thing from the one described in this brief) derived from the distance matrix of a graph, and tried to compute some huge numbers as graph invariants, such as PSI numbers, drawing on the experience with Narkissos. It ran relatively fast and solved many problems in quite constant time, like a clock. But we understood that the method would not be perfect, as the minimum PSI number was beyond the coral reef. Eventually we found that we were already outside of graphs and blindly moving on to a more general problem, i.e., Matrix Isomorphism.

We regard a matrix M as a sort of distance space. An entry of M is a distance from a point p to another point q. This distance space is not at all a geometrical one but much more abstract. In this view, a graph is a subclass of matrix. Our approach is very close to the method taken by Boris Weisfeiler et al. [2]. Conversely, they call a matrix a graph. This object was independently discovered by D. G. Higman [3], who called his formulation a coherent configuration or adjacency algebra, to which Boris gave another name, cellular algebra. Some differences may exist between our construction and their theory. They showcased an algorithm for matrix isomorphism, but unfortunately it lacks reality and remains on a shelf, while Elsie is alive here and demonstrating evidentially.

We make a dynamic analysis of an object with some distance measure down to the depth level 3.5, which takes about O(n^18) time in our current implementation. It is almost sure (and probably it is not so hard to build such a program) that the time complexity of matrix/graph isomorphism never exceeds O(n^log n), i.e., our problem has its own bottom in quasi-polynomial time. Don't say that even the ocean has her bottom in the depth. Once a thorough but closed inspection is done, the orbits of the automorphism group of the matrix are already in our hands, and we just make some free choices to fix the ordering. The count of the free choices is exactly the number of the symmetry axes of the matrix/graph.

According to a book by C. L. Liu, "a partition of a set carries two things: enough information to distinguish two elements in different blocks, and an ambiguity between two indistinguishable elements in the same block" [4]. Agreeing with his philosophical reflection, we developed the heart of our experiment, which we call the Critical State Experiment. Our method is basically to partition an object into blocks and eventually obtain a set of trivial blocks, i.e., a canonical ordering of the point set. The PSI number is used here only for the verification of the resulting canonical ordering, but it is meaningful as it meets the Delta number sequence at the end of the experiment.

We have several measures to partition a matrix M into blocks. However, once a split happens on a matrix, it propagates in every direction on the matrix and causes what we call a fission chain reaction of the blocks. How to control this disastrous decay is the most problematic point, and we solved it. That is the Critical State Experiment. It is a heavily multiplied circulation process, and we sometimes call it a Reactor. Once a Critical State Experiment is executed, the matrix turns into what we call a metricative matrix. In other words, inside the Reactor the matrix is, so to say, in some kind of dismetricative status, though it is really intermediate and temporary. The Reactor is able to manage this situation in a strictly deterministic way.

We speculate that Matrix Isomorphism is a sort of face recognition; in that sense our experiment is nothing but biometrics. We do not yet have a proof of the algorithm, but we believe that we are gradually approaching the goal.

==========================
       Definitions
==========================

D1. Let X and Y be sets of points and Z be a set of numbers, |X|=m, |Y|=n. An mxn matrix M(X,Y) is a function M:XxY→Z such that M(p,q)=z \in Z, p \in X, q \in Y, and we say that the value z is a distance d(p,q) from a point p to a point q. If X=Y we may neglect the point sets, assuming that X=Y={1,2,...,n}. XxY may denote M when it is apparent from context.

Note: A natural generalization of a matrix is as follows. Let X, Y and Z be sets of elements. An mxn matrix M(X,Y) is a function M:XxY→Z such that M(p,q)=z \in Z, p \in X, q \in Y. At D7 we introduce a matrix of this kind.

D2. Let M(X^2) denote an nxn matrix M, where X is a point set. An nxn matrix M(X^2) is isomorphic to a matrix M'(X^2) iff there exists a 1-to-1 function f:X→X such that for all p,q \in X: M(p,q)=M'(f(p),f(q)). If the matrices M and M' are identical, then the function f is called an automorphism of M. For any matrix M(X^2) there exists a trivial automorphism f such that f(p)=p. Sometimes M=M' represents that M and M' are isomorphic.

Note: The set of automorphisms of a matrix M(X^2) forms a subgroup of the permutation group on X. It is easy to see that the relation AUTOMORPHISM(p,q) is an equivalence relation, hence it partitions the set X into equivalence classes, which are called the orbits @ of the automorphism group of the matrix.

D3. Suppose an mxm matrix M(X^2) and an nxn matrix M'(Y^2), where |X|=m and |Y|=n. If there exists an onto function f:X→Y such that for all p,q \in X: M(p,q)=M'(f(p),f(q)), then f is a homomorphism from M to M', and M'(Y^2) is called a homomorphic image of M(X^2). If X=Y, then f becomes an isomorphism of M to M'.

D4. Let M(N^2) be an nxn matrix on N={1,2,...,n}. We assume that a point i \in N is an indexed/labeled element of the point set by the number i. We assign a number P[i] called a point PSI number to each point i of the matrix M as follows:

P[i]=\sum[j=1 to n]M_ijX^(n-j), where X>M_ij for all i,j \in N.

D5. A matrix PSI number P of a matrix M(N^2) is defined as follows using the point PSI number P[i] of each point i \in N.

P=\sum[i=1 to n]P[i]Y^(n-i), where Y>P[i] for all i \in N.

Note: Given two matrices M(N^2) and M'(N^2), let P* and P'* be the sets of all PSI numbers of M and M' respectively. Then from the definition, M and M' are isomorphic iff there exist PSI numbers P \in P* and P' \in P'* such that P=P'. (We once showed that if M is isomorphic to M', then P*=P'*.)
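
To make D4 and D5 concrete, here is a minimal sketch in Python (Elsie itself is C++ with custom huge-number classes; Python's arbitrary-precision integers let us follow the definitions literally). The function names are ours, not Elsie's.

def point_psi(M, i, X):
    # P[i] = sum over j of M[i][j] * X^(n-j)  (D4; 0-indexed here)
    n = len(M)
    return sum(M[i][j] * X ** (n - 1 - j) for j in range(n))

def matrix_psi(M):
    n = len(M)
    X = max(max(row) for row in M) + 1        # any X > every entry works
    P = [point_psi(M, i, X) for i in range(n)]
    Y = max(P) + 1                            # any Y > every P[i] works
    return sum(P[i] * Y ** (n - 1 - i) for i in range(n))

Permuting M changes the PSI number, which is why the algorithm first fixes a canonical ordering and only then computes the reference PSI number.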

D6. A block B, |B|<=n, is a subset of the point set X of an nxn matrix M(X^2). A partition PI of M is a set of disjoint blocks B of M such that \sum|B|=n, B \in PI. We say that a partition PI of M sections the matrix M into cages {AxB}, where A and B are blocks in PI. A cage C=AxB is an m'xn' matrix such that |A|=m' and |B|=n', that is, a submatrix M(A,B) of M. A cage AxA is said to be a birdcage or a core part of the matrix M. These definitions of a block and a partition extend naturally to mxn matrices and metrixes (see D7).

Note: In our experiment, we provide two partitions K and PI, where K is ordered and PI is a refinement of K but not ordered. We assume this all through the brief with a few exceptions, where the distinction is neglected.

D7. We define an extension of a matrix called a metrix, whose entries are not numbers but pairs of numbers. A distance set D is a set of ordered pairs d=(x,y) of numbers x and y, where we call d a distance metric, x a left distance and y a right distance. We let d.x denote the left distance x, as well as d.y the right distance y.

Let X and Y be sets of points, |X|=m, |Y|=n, and D be a set of distance metrics. An mxn metrix Z(X,Y) is a function M:XxY→D such that M(p,q)=d \in D, p \in X, q \in Y, and we say that d is a distance metric (briefly, a metric) from a point p to a point q. A metrix Z is considered to be an ordered pair of two matrices M and M' such that Z=(M,M'), Z.x=M, Z.y=M', i.e., Z(p,q)=d \in D, d.x=M(p,q) and d.y=M'(p,q). We may call the first half of the metrix Z the left Delta matrix Z_L and the second half the right Delta matrix Z_R respectively.

D8. Let Z(N^2) be an nxn metrix on the number sequence N={1,2,...,n} sectioned by an ordered partition K, and let K[i] denote the block order in K of a point i. Then we assign a distance metric D[i] called a point Delta number to each point i \in N as follows:

D[i].x=\sum[j=1 to n]Z_ij.xX^(K[j]-1), where X>Z_ij.x for all i,j \in N.
D[i].y=\sum[j=1 to n]Z_ij.yX^(K[j]-1), where X>Z_ij.y for all i,j \in N.

D9. A matrix Delta number D of a metrix Z(N^2) is a distance metric defined as follows using the point Delta number D[i] of each point i \in N.

D.x=\sum[i=1 to n]D[i].xY^(n-i), where Y>D[i].x for all i \in N.
D.y=\sum[i=1 to n]D[i].yY^(n-i), where Y>D[i].y for all i \in N.

D10. An isometrie is a sequence {z,z,...,z} of the same number z, in other words a multiset of a number z. The length of an isometrie is called the valence of the isometrie. An isometrie has a parameter set (w,v,c), where w is the weight (the value z), v is the valence and c is the cleavage (see D20) of the isometrie.

D11. A polymetrie is a number sequence in increasing order, i.e., an ordered multiset of numbers. A polymetrie is said to consist of isometries and has a parameter set (w,l,m), where w is the weight, l is the length of the polymetrie and m is the number of isometries in it, i.e., m is the number of different values in the polymetrie. The weight w of a polymetrie(w,l,m)={z_1,z_2,...,z_l}, z_1≤z_2≤...≤z_l, is defined as,

w=\sum[i=1 to l]z_iZ^(i-1), where Z>z_i for all 1≤i≤l.
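
For illustration, a small Python sketch computing the parameter set (w,l,m) and the isometries of a polymetrie as defined at D10 and D11. The names are ours, and the cleavage field is left out since it needs the graphs of D20.

from collections import Counter

def polymetrie_params(seq):
    # A polymetrie is the multiset of seq in increasing order (D11).
    zs = sorted(seq)
    Z = zs[-1] + 1                                  # any Z > every z_i works
    w = sum(z * Z ** i for i, z in enumerate(zs))   # weight (D11)
    l = len(zs)                                     # length
    iso = sorted(Counter(zs).items())               # isometries as (weight z, valence v)
    m = len(iso)                                    # number of isometries
    return w, l, m, iso

For example, polymetrie_params([3, 1, 1, 2]) sorts the sequence to {1,1,2,3} and reports l=4, m=3 with isometries (1,2),(2,1),(3,1).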

D12. A polymetrie P!(x,B):(w,l,m) defined below is said to represent a distance d(x,B) from a point x to a block B in a matrix M. It is an ordered number sequence {x_1,x_2,...,x_l}, l=|B|, where x_i=M(x,p) is a point distance from the point x to a point p \in B. The weight of the polymetrie P!(x,B) is computed as defined at D11, and it turns into a distance as P!(x,B).w=d(x,B).

Similarly, a polymetrie P!(A,y):(w',l',m') is said to represent a distance d(A,y) from a block A to a point y in M. It is an ordered number sequence {y_1,y_2,...,y_l'}, l'=|A|, where y_i=M(p,y) is a point distance from a point p \in A to the point y. The weight of the polymetrie turns into a distance from the block A to the point y as P!(A,y).w=d(A,y).

D13. A block polymetrie P![A,B] and a distance metric d[A,B] are defined using point to/from block polymetrie P!(x,B)/P!(A,y) and point to/from block distance metric d(x,B)/d(A,y).

Suppose a cage C=AxB of a matrix M. If the polymetries P!(x,B) are equal for all x \in A, then the cage C has a left polymetrie P![A,B); otherwise it does not. If the polymetries P!(A,y) are equal for all y \in B, then the cage C has a right polymetrie P!(A,B]; otherwise it does not. The number sequence of a left polymetrie is viewed horizontally in a row of C, as well as the number sequence of a right polymetrie is viewed vertically in a column of C. With respect to the polymetrie weights, they are defined as P![A,B).w=P!(x,B).w and P!(A,B].w=P!(A,y).w. Consequently d[A,B)=d(x,B) and d(A,B]=d(A,y).

If the cage C=AxB has both the left and the right polymetrie, then the cage has a block polymetrie P![A,B]=(P![A,B),P!(A,B]); otherwise it does not. A block polymetrie P![A,B] is an ordered pair of the polymetries P![A,B) and P!(A,B]. The block distance metric d[A,B] is identified with the weight of the polymetrie.

d[A,B]=P![A,B].w=(P![A,B].w.x,P![A,B].w.y)
=(P![A,B).w, P!(A,B].w)=(d[A,B),d(A,B])

Thus a block distance metric d[A,B] has two elements, the left distance d[A,B) and the right distance d(A,B]. Hence it is considered to be a kind of vector. If the cage C=AxB does not have the block polymetrie P![A,B], then the distance d[A,B] is UNDEFINED.

Note: Sometimes P![A,B] may be abused for P![A,B].w as above. Since we did not assume that our matrices are diagonally symmetric, d[A,B] is not necessarily equal to d[B,A]. If blocks A and B are trivial, say {p} and {q}, then:

d[A,B]=(d(p,{q}),d({p},q))=(d(p,q),d(p,q))=(M(p,q),M(p,q))=(d,d),

where d is a point distance d(p,q)=M(p,q) from p to q. Thus the trivial block distance d[{p},{q}] is always available, as are point to block/block to point distances. A trivial block distance d[{x},{y}]=(d,d) and the point distance d(x,y)=M(x,y)=d are nevertheless different objects. That is, a point distance and a point to/from block distance are scalar values, while a block distance metric is a vector. With respect to polymetries and distance metrics, we use [] for vectors and (),[),(] for scalars.

D14. A birdcage C=AxA has a block polymetrie P![C]=P![A,A] iff for all x \in A: P!(x,A) is equal and P!(A,x) is equal respectively. In such a case, it is easy to see that P!(x,A)=P!(A,x) for all x. Consequently P![C]=P![A,A]=(P![A,A), P!(A,A]). Then we have a block distance metric d[C]=P![C].w=d[A,A].

Note: The distance metric d[C] of a birdcage C=AxA is considered to be a kind of radius, but it may be more suitable to regard it as an identity of the block A.

D15. An mxn matrix M(X,Y) is IRREDUCIBLE iff the point set pair (X,Y) has the distance metric d[X,Y]=d[M], i.e., M has the block polymetrie P![M]=P![X,Y].

Note: An nxn IRREDUCIBLE matrix is a generalization of a Latin square. I.e., it allows a number x to appear in a row/column multiple times, while a Latin square does not; but in both cases every row and every column must have the same number set. In other words, a Latin square is an nxn IRREDUCIBLE matrix M(N^2) such that the valence of every isometrie of M is one.
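
A direct way to test the property, assuming the characterization above (every row carries one and the same multiset of entries, and likewise every column); the function name is ours:

from collections import Counter

def is_irreducible(M):
    # D15: all rows share one multiset of entries, and all columns share
    # one multiset.  A Latin square is the special case where every
    # multiplicity (valence) is one.
    rows = [Counter(r) for r in M]
    cols = [Counter(c) for c in zip(*M)]
    return all(r == rows[0] for r in rows) and all(c == cols[0] for c in cols)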

D16. An nxn matrix M(X^2) sectioned by a partition K (alternatively PI) is metricative iff every cage C=AxB, A,B \in K, is IRREDUCIBLE. In other words, every block pair (A,B), A,B \in K, has its distance metric d[A,B], and no block distance is UNDEFINED; otherwise we say that the matrix M is dismetricative. When a matrix M is metricative, we may say that M is normally partitioned and that the partition K is a normal partition with respect to the distance measure.

D17. An nxn Delta matrix Z(N^2) is an nxn metrix derived from an nxn matrix M(N^2) sectioned by a partition PI (possibly an ordered partition K) such that its distance metric Z_ij is defined by applying the point-block distance measure as follows:

Z_ij.x=P!(i,PI[j]).w=d(i,PI[j]), and
Z_ij.y=P!(PI[i],j).w=d(PI[i],j),
where PI[i] is the block to which the point i \in N belongs. As was noted at the foot of D13, point-block distances are scalar values and they are available at any time, even if the matrix is not yet metricative. In other words, a Delta matrix is always *constructible* from the matrix M. If M is metricative, then as every cage AxB comes to have its distance metric, the distance metric Z_ij turns to a simpler form,

Z_ij=d[PI[i],PI[j]].

Since the Delta matrix Z is a *metrix* defined at D7, each point i \in N has its point Delta number D[i] and the metrix Z has its matrix Delta number D provided Z is accompanied with an ordered partition K. Generally an unordered partition PI is used to build the Delta matrix and an ordered partition K is used to compute the Delta numbers. The matrix Delta number of a metrix Z is said to be the matrix Delta number of the matrix M from which Z is derived.

Note: If the partition K is total, then Z_ij.x=Z_ij.y=d(i,j)=M_ij. Hence it is easy to see that the point Delta number D[i] coincides with the point PSI number P[i], i.e., D[i].x=D[i].y=P[i]. Similarly the matrix Delta number D of Z coincides with the matrix PSI number P of the matrix M, i.e., D.x=D.y=P, even if the matrix M is not diagonally symmetric.
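
For illustration, a sketch building the Delta matrix of D17 from point-block distances only, so it is constructible whether or not M is metricative; the helper names and the choice of the weight base are ours:

def delta_matrix(M, PI):
    # PI is a list of blocks (lists of point indices).  Entry Z[i][j] is
    # the metric (d(i, PI[j]), d(PI[i], j)) of D17.
    n = len(M)
    block_of = {p: tuple(B) for B in PI for p in B}
    base = max(max(row) for row in M) + 1     # base for polymetrie weights
    def weight(zs):                           # weight of a sorted sequence (D11)
        return sum(z * base ** k for k, z in enumerate(sorted(zs)))
    def d_pb(i, B):                           # d(i,B): row slice of the cage
        return weight(M[i][q] for q in B)
    def d_bp(A, j):                           # d(A,j): column slice of the cage
        return weight(M[p][j] for p in A)
    return [[(d_pb(i, block_of[j]), d_bp(block_of[i], j))
             for j in range(n)] for i in range(n)]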

D18. A section H=M\PI is a metrix derived from an nxn metricative matrix M sectioned by a partition PI such that the point set of H is the set of blocks in PI and a distance metric is the block distance d[A,B] of the blocks A,B \in PI.

Note: It must be noted that a section H=M\PI is established iff every cage (including birdcages) has its distance metric d[A,B]. However, as noted at the foot of D13, trivial block distances are always available, hence any matrix M has at least one valid section H. Let us call it a trivial section H*; then H* corresponds to the trivial automorphism of M. It is easy to see that a section H is a homomorphic image of a Delta matrix Z defined at D17.

D19. Suppose two nxn metricative matrices M and M' and their Delta matrices Z and Z' with partitions K,PI and K',PI' respectively. Let the matrix Delta numbers of M and M' be D and D' respectively. Then we say that the matrices M and M' are D(elta)-isomorphic iff D=D'.

Note: Suppose sections H=M\PI and H'=M'\PI' of metricative matrices M and M'. When it happens that D=D', the Delta matrices Z and Z' are isomorphic, as well as H and H'. I.e., D=D' ↔ Z=Z' ↔ H=H' ↔ M and M' are D-isomorphic.

D20. An isometrie(w,v,c) on a metricative matrix M can be seen as a v-regular graph in the following sense. Consider a birdcage C=AxA sectioned by a partition PI. By definition, every point p \in A has the same polymetrie and the same isometries. Let one of the isometries be I(w,v,c)={z,z,...,z} and let H be a graph H(V,E) such that V is the block A, and for all p,q \in V: e(p,q) \in E iff M(p,q)=z. Clearly H is a v-regular graph, and such a regular graph always exists as far as the matrix M is metricative.

We will call such a regular graph H the v-factor of the isometrie, and if H is disconnected, we say that the isometrie is cleaving. The cleavage c in the parameter set (w,v,c) is assigned the number of connected components of the v-factor H of the isometrie, i.e., if c>1 then the isometrie(w,v,c) is cleaving.

Suppose a cage C=AxB of a metricative matrix M sectioned by a partition PI, and polymetries P![A,B):(z,l,m) and P!(A,B]:(z',l',m). Since M is metricative, the numbers m of isometries are equal. Let P![A,B) have an isometrie (w,v,?) and P!(A,B] have an isometrie (w,v',?), where lv=l'v' and their weights w are the same. Now we make a bipartite graph H(A,B,E) with lv=l'v' edges such that for all p \in A, for all q \in B: e(p,q) \in E iff M(p,q)=w. Obviously the bipartite graph H is (v,v')-regular, and if H is disconnected, then every connected component of H is also a (v,v')-regular bigraph. Let the number of connected components of H be k. Then the cleavage field c of the isometrie (w,v,c) is filled by k. If c>1, then we say that the isometries are cleaving.

Note: Regarding polymetries, there exists a distinction, left or right. It is the same for isometries. However we neglect it for a reason: as mentioned above, a left/right polymetrie pair has the same set of isometries.
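
A sketch of the component count behind the cleavage parameter, covering both cases of D20 (the v-factor on a birdcage and the (v,v')-regular bipartite graph on a cage); the union-find and the function name are ours:

def cleavage(M, A, B, w):
    # Number of connected components of the w-factor of D20.
    # Birdcage (A == B): simple graph on A with e(p,q) iff M[p][q] == w.
    # Cage (A != B): bipartite graph H(A,B,E) with the same rule.
    if A == B:
        verts = [('a', p) for p in A]
        edges = [(('a', p), ('a', q))
                 for p in A for q in A if p != q and M[p][q] == w]
    else:
        verts = [('a', p) for p in A] + [('b', q) for q in B]
        edges = [(('a', p), ('b', q))
                 for p in A for q in B if M[p][q] == w]
    parent = {v: v for v in verts}            # a plain union-find
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in verts})      # the cleavage c; c > 1 means cleaving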

D21. An nxn matrix M(X^2) is COMPLETE iff all off-diagonal entries are equal, i.e., M(p,q)=M(p',q') for any pairs (p,q) and (p',q') with p=\=q and p'=\=q', and all diagonal entries are equal, i.e., M(p,p)=M(q,q) for all points p,q \in X. As well, an mxn matrix M(X,Y) is BICOMPLETE iff for any pairs (p,q) and (p',q'), p,p' \in X and q,q' \in Y: M(p,q)=M(p',q'). Of course a trivial matrix M({x})/M({x},{y}) is COMPLETE/BICOMPLETE respectively.

D22. An mxn IRREDUCIBLE matrix M(X,Y) is CRYSTALLOID iff there exists no cleaving isometrie in the matrix polymetrie P!(M)=P!(X,Y). An nxn metricative matrix M(X^2) sectioned by a partition PI is crystalloid iff every cage C=AxB, A,B \in PI, is CRYSTALLOID, i.e., has no cleaving isometries.

D23. A pair of points p and q of an nxn matrix M(X^2) are similar iff for some automorphism f of M, f(p)=q. If every pair of points p,q \in X are similar, then the matrix M is said to be SYMMETRIC. If a SYMMETRIC matrix M is at the same time CRYSTALLOID, then M is said to be a CRYSTAL.

Note: It is easy to see that if the orbits @ of a matrix M have only one element, namely X, then the matrix M is SYMMETRIC. If the orbits @ are totally trivial, then we say that the matrix M is an IDENTITY matrix. A SYMMETRIC matrix is always IRREDUCIBLE. Trivially a COMPLETE matrix is SYMMETRIC.

D24. An nxn IRREDUCIBLE matrix M(X^2) is SUBSYMMETRIC iff for every pair of points p and q \in X: M((X-p)^2) and M((X-q)^2) are D-isomorphic, i.e., their matrix Delta numbers are equal. Trivially a SYMMETRIC matrix is SUBSYMMETRIC. If M is SUBSYMMETRIC but not SYMMETRIC, then M is said to be PARASYMMETRIC.

Note: Regarding the terminology, capital letter terms are all considered to be properties of matrices, while small letter terms, e.g. metricative, crystalloid etc., are statuses of matrices in the experiment which depend on the partition applied, hence they vary as time goes on. A metricative matrix is also known as a romanic matrix, and a crystalloid matrix as a grecoromanic matrix.

D25. Let X be a set and M(X^2) be an nxn matrix on X sectioned by an ordered partition K. We define an operation on a matrix M called a split. A split M|p is a refinement of the partition K of the matrix M obtained by splitting a point p off from the block B to which p belongs. That is, a split M|p, p \in X, is a matrix derived from M partitioned by an ordered partition K* in the following way. Let K* be the product of K and the partition {{p},X-{p}}, i.e., K*=K*{{p},X-{p}}. We assume that K* is ordered naturally. Then the split M|p is the matrix M sectioned by the ordering K*. In a similar way a subblock split M|B, B \in PI, is defined with an unordered partition PI and an ordered partition K by applying the product K*=K*{B,X-B} on the matrix M.

D26. A segregation is a partitioning of a matrix M(X^2) caused only by the existing differences between points and blocks respectively in the view of distance measures with respect to the matrix. If such a difference exists then we say that the matrix is segregative; otherwise the matrix has no more segregation. We recognize the following as the distance measures of the matrices. Except for measure (4), every measure affects the ordering of points/blocks directly, as they are numerical/comparable measures.

(1) bi-metric: point distance d(p,q)=M(p,q)
(2) poly-metric: point-block distances d(p,B),d(A,q)
(3) poly-metric: block-block distances d[A,B]
(4) iso-metric: unordered division by isometrie cleavage
(5) for each ordered and unordered block B: block size
(6) for each point p: point Delta numbers D(p) of the Delta matrix Z of M
(7) for each point p: Delta numbers D(Z(p)) of submatrices M((X-p)^2)
(8) for each point p: evaluate split M|p by the measure (7)
(9) for each block B: for each subblock S \in B: evaluate split M|S by (8)

Note: All the measures except (1) depend on the partition applied, and of course the partitions vary as time goes on. "Segregation" means that the time sequence of the partitions K and PI is decided deterministically and label-independently. We assume that we "evaluate" the objects always in this meaning and manner, and we say that the evaluation is done "segregatively".

In the algorithm described in the next chapter, segregation is performed by 4 procedures, CriticalState < KU-Experiment < NutsCracker < WakeUpInnocent, where < denotes the responder vs. caller relation. The measures are shared as follows:

[1] CriticalState (1)(2)(3)(4)(5)(6)
[2] KU-Experiment (1)(2)(3)(4)(5)(6)(7)
[3] NutsCracker (1)(2)(3)(4)(5)(6)(7)(8)
[4] WakeUpInnocent (1)(2)(3)(4)(5)(6)(7)(8)(9)

In the actual implementation, we do not take measures (6) and (7) directly; rather we compare the points by the lexicographic order of both rows and columns of the Delta matrix, of course substantially doing the same operation. Segregation is a terminology borrowed from Crystallography, as are cleavage and crystalloid.

D27. A metricative matrix M with partitions K and PI is in the prematurity state iff M is totally partitioned by the ordered partition K into SYMMETRIC blocks, M is in this state only by segregations, and there exists no more segregation with the distance measures (1)-(8).

D28. A premature matrix M with partitions K and PI is in the maturity state iff M is totally partitioned by the unordered partition PI into CRYSTAL blocks, for every block B \in K and every pair of subblocks S,S' \in B the splits M|S and M|S' are D-isomorphic, M is in this state only by segregations, and there is no more segregation with any of the distance measures.

Matrix Isomorphism Problem: Given nxn matrices M(X,X) and M'(X,X), decide whether M is isomorphic to M', i.e., whether there exists a 1-to-1 function f:X→X such that for all p,q \in X: M(p,q)=M'(f(p),f(q)).

==========================
        Algorithm
==========================

A1:Echoing Method
1. Given nxn matrices M and M'
2. For each matrix: call function GetGraphInvariant, and obtain the reference
PSI numbers P and P' of M and M' respectively.
3. If P=P' then print "ISOMORPHISM" else print "NONISOMORPHISM"
4. Halt

F1: Outline of the Experiment (GetGraphInvariant)


Initialize->Pretest-+->Prematurity Phase-+->Maturity Phase-+->Breeding-+->End
                    |                    |                 |           |
                    +----------<---------+--------<--------+-----<-----+

Function GetGraphInvariant(M)
1. Given nxn matrix M
2. Initialize ordered partition K and unordered refinement PI of K
3. Pretest: Do CriticalState(M,K,PI)
4. Do
   4.1 Prematurity Phase: Do Segregation(M,K,PI)
   4.2 If segregation on going, then continue 4
   4.3 Maturity Phase: Do WakeUpInnocent(M,K,PI)
       If segregation on going, then continue 4
   4.4 Breeding: Let B be the first non-trivial block \in K with the
       highest priority, where the priority is in the order
       SYMMETRIC > CRYSTAL > COMPLETE
       Split(M,K,PI,x,x), where x is the topmost index of B
       Do CriticalState(M,K,PI)
5. While not totally partitioned
6. Take out the resulting canonical ordering O* of M, and permute M by O*
   to get the standard matrix M*
7. Compute the matrix PSI number P* of M*, then return P*

Procedure CriticalState(M,K,PI)
(1) Given nxn matrix M, ordered partition K and the refinement PI of K
(2) Do
  (2.1) Do
    (2.1.1) Do
      (2.1.1.1) Do
        (2.1.1.1.1) Do
          (2.1.1.1.1.1) Do
            (2.1.1.1.1.1.1) Do
              (2.1.1.1.1.1.1.1) Do
                (2.1.1.1.1.1.1.1.1) Do
                  (2.1.1.1.1.1.1.1.1.1) Do
                    (2.1.1.1.1.1.1.1.1.1.1) Make a left Delta matrix by K
                                            and sort the matrix
                  (2.1.1.1.1.1.1.1.1.2) While K developing
                  (2.1.1.1.1.1.1.1.1.3) Make a right Delta matrix by K
                                        and sort the matrix
                (2.1.1.1.1.1.1.1.2) While K developing
                (2.1.1.1.1.1.1.1.3) Sort the partition K by the size of blocks
              (2.1.1.1.1.1.1.2) While K developing
              (2.1.1.1.1.1.1.3) Make a left Delta matrix by PI and sort the matrix
            (2.1.1.1.1.1.2) While PI developing
            (2.1.1.1.1.1.3) Make a right Delta matrix by PI and sort the matrix
          (2.1.1.1.1.2) While PI developing
          (2.1.1.1.1.3) Sort the partition PI by the size of subblocks
        (2.1.1.1.2) While K developing
        (2.1.1.1.3) Do IsometrieCleavage(M,K,PI) for the core part of K
      (2.1.1.2) While PI developing
      (2.1.1.3) Do IsometrieCleavage(M,K,PI) for the cages of K
    (2.1.2) While PI developing
    (2.1.3) Do IsometrieCleavage(M,K,PI) for the core part of PI
  (2.2) While PI developing
  (2.3) Do IsometrieCleavage(M,K,PI) for the cages of PI
(3) While PI developing

Procedure IsometrieCleavage(M,K,PI)
(1) Given nxn metricative matrix M with partitions K and PI
(2) For all blocks A \in PI: For all blocks B \in PI:
      For all isometrie pairs Ih(w,v,c) and Iv(w,v',c) respectively in the
      polymetrie P![A,B) and the polymetrie P!(A,B]:
        Make the (v,v')-regular bipartite graph H as described at D20, and
        get the connected components C={H_1,H_2,...,H_c} of H
        Make a partition pi[AB] for the cage AxB by the vertex sets C
(3) Update the partitions K and PI by the partitions pi[AB]

Procedure Segregation(M,K,PI)
(1) Given nxn metricative matrix M(X^2), partitions K and PI
(2) Do
(3)   Do
        Let D be the point set of the trivial blocks in K
        Make KU-Experiment(M,K,PI,X-D)
(4)   While segregation on going
(5)   Do
        For all non-trivial blocks B \in K:
          If B is not COMPLETE, then do NutsCracker(M,K,PI,B), and
          if B is cracked, then break and continue (3)
(6)   While segregation on going
(7) While segregation on going

Procedure KU-Experiment(M,K,PI,B)
(1) Given nxn metricative matrix M(N^2), partitions K and PI, block B \in K
(2) For all points i \in B:
      Eliminate i from M and make a submatrix S[i]=M((N-i)^2)
      Do CriticalState(S[i],K',PI'), where K' and PI' are temporary partitions
      Get the matrix Delta number Z[i] of the submatrix S[i]
(3) Get the ordering O of the points X by the order of Z[i]
(4) Update the partitions K and PI by the ordering O
(5) Do CriticalState(M,K,PI)

Procedure NutsCracker(M,K,PI,B)
(1) Given nxn metricative matrix M(N^2), partitions K and PI, block B \in K
(2) For all points i \in B:
      Duplicate M,K,PI into temporary M',K',PI'
      Split(M',K',PI',s,i), where s is the topmost index of B
      KU-Experiment(M',K',PI',B)
      Make the Delta matrix Z' of M' by K'
      Get the matrix Delta number Z* of M', and set it in Z[i]
(3) Make an ordering O of the points by the order of the Z[i] obtained
(4) Update the partitions K and PI by the ordering O
(5) Do CriticalState(M,K,PI)

Procedure Split(M,K,PI,x,y)
(1) Given nxn metricative matrix M(N^2), partitions K and PI, indices x and y
(2) Letting the point at the index y be p, move p to the index x
(3) Set up the point p as a singleton block {p} \in K
(4) Update the partitions K and PI

Procedure WakeUpInnocent(M,K,PI)
1. Given nxn premature matrix M, partitions K and PI
2. For all ordered blocks B \in K
   2.1 For all unordered subblocks S \in B
       Duplicate M,K,PI into temporary M',K',PI'
       Split the block B, giving the subblock S an order in K'
       Do Segregation(M',K',PI')
       Compute the matrix Delta number Z(S) of M'
3. Sort the Delta numbers Z(S) and get the ordering O
4. Update the partitions K and PI with the ordering O

Note: As mentioned at D26, we do not actually compute each matrix Delta number in the procedures KU-Experiment and NutsCracker; rather we make an nxn metrix Z with entries of point Delta numbers and directly compare the rows in lexicographic order. In other words we evaluate polymetries of the higher order metrix Z whose distance metric is a point Delta number, i.e., some kind of sum of polymetries. This is mainly to avoid overflow of the numbers.

==========================
          Claim
==========================

We claim the following with respect to the algorithm A1. We suppose that the claims raised here are sufficient for proving the algorithm to be a polynomial time algorithm for Matrix Isomorphism. C1 clearly holds. The claim C2 must be the key. C3 seems rather hard, but perhaps a proof will be given eventually. If C3 holds, C4 is justified. C5 is accountable, as we maintain the ordered partition K just for C5. Hence once C2 is fulfilled, the other claims almost automatically come true. In other words, if our algorithm fails, then the reason is just one: it failed to crack a PARASYMMETRIC block into SYMMETRIC blocks. But we believe that this never happens; at the least we have never met a counterexample so far.

C1: It runs deterministically and halts in polynomial time.
C2: It cannot fail to reach a maturity state.
C3: When it reaches a maturity state, every block is an orbit of the group.
C4: Then it makes a non-deterministic choice of a point in a deterministic way.
C5: The resulting order is canonical.

I think that we need no explanation for C1. Obviously the algorithm A1 runs in polynomial time because it has no recursive calls, no backtracking, and no exponential-size search tree. It has only a fixed depth of nested loops, about O(n^18) in our current implementation, although that is too heavy for a small computer. Clearly the loop count of each loop never exceeds n because the partition proceeds in an irreversible way, hence the count is bounded by n.

Intuitively it seems that C2 clearly holds. Suppose a strongly regular graph G and a matrix representation M of G. Add a vertex p to G, connect it to any vertex x of G, and let the resulting matrix be M'. Obviously M' is easily breakable. We repeat this for all vertices x \in X and split the matrix M with the matrix Delta number set D[x]. In other words we seek a difference among the points of the object and find it as far as the object is not SYMMETRIC. What the NutsCracker is actually doing is nothing but this operation. So it is almost sure that the NutsCracker is able to crack every PARASYMMETRIC block. Hence it cannot fail to reach a prematurity state, as well as a maturity state.

In principle C3 is easier than C2, provided we admit a premise: "points in the same orbit never have any difference between each other with respect to the distance measures". Then it is obvious that the segregation process never splits any orbit in the matrix all through the experiment, and the set of SYMMETRIC blocks obtained at the exit of a maturity phase is really the orbits, wholly preserved. So all we have to do is explain that each measure (1)-(9) at D26 never violates any orbit in a matrix. Maybe it is somewhat tedious and may become a long description, but basically not a difficult job.

Once Claim C3 is fulfilled, C4 causes no more trouble, because we can select any point in any orbit freely and safely. As far as the objective matrix has some symmetry, we have to make this free choice. Actually we select a point in the first block (orbit) in the partition, as we have to complete the ordered partition for the sake of the canonical ordering. To evaluate the effect of the choice we return to the top of the process and continue the segregation until every symmetry of the sample is fixed.

As far as we follow the segregation measures described at D26, the partition K gives us an established, unmistakable linear ordering. When it proceeds to the maturity phase, we make the final ordering for the unordered subblocks in the partition PI (possibly some of them remain in the same order). When we reach the exit point of a maturity phase, we have already obtained the perfect set of orbits, and that is sufficient as the result of an Isomorphism Experiment. The reason why we continue the experiment is just that we are going to output the matrix PSI number as a matrix invariant. Hence C5 is clearly fulfilled.

Note: The orbits are linearly ordered, hence we can get a canonical ordering. Some unordered subblocks may still remain at the exit of the maturity state, but these subblocks are not orbits. In other words, the unordered partition PI gives us a set of CRYSTALs and the ordered partition K gives us a set of orbits, while an orbit is always SYMMETRIC but not necessarily a CRYSTAL.

==========================
< Theorem > Pending...
==========================
==========================
< Proof > Pending...
==========================
==========================
        Conclusion
==========================

As we have not yet got the complete proof, we should not speak too loudly. So let's make some small talk regarding our configuration and what comes next. I regard Graph Theory as a sort of physics, i.e., a part of natural science, and this is what I love the most in the theory. On the other hand, I dare to say that Complexity Theory is nearly falling into a theology, while Topology is all but a religion, very sorry! Our stance is that of experimental physics, and we heavily rely on the computing power of our machine. As is mentioned in the following Technical Note, our computing power is still far smaller than we desire. (We need a 128-bit-width machine just now.) However our method becomes realistic for the first time with the recent industrial evolution. This must be emphasized, as our method is just the first example in the direction where graphs are measured in a very strict way.

I think that this situation somewhat resembles the case when Descartes established Analytical Geometry. Perhaps some mathematicians regretted seeing that the royal geometry became to some extent mechanical manipulation on a dry and abstract coordinate system. I confess I do myself, as I like drawing a graph on paper and puzzling it out. But I think that this direction is almost unavoidable for future graph theory. Perhaps the first step may be to discover a way to seek the minimum PSI number as an absolute graph invariant.

In our (expanded) definition, a matrix M is a function f:XxY→Z, hence M can be regarded as a binary operation, thus an algebraic system. In such a view, a group is considered to be a subclass of matrix such that it is an onto function f:XxX→X, there exists a point e such that f(e,x)=x and some point y \in X satisfies f(y,x)=e for all x \in X, and further for all x,y,z \in X: f(x,f(y,z))=f(f(x,y),z). Suppose a normal subgroup H of G and the coset partition PI by H. Then the partition PI sections M normally with respect to the distance measure, i.e., M is metricative with the partition PI, where M is the matrix representation of the group G. (This is our conjecture, but maybe provable. See P7 below.)
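
The conjecture can at least be checked empirically. Here is a tiny sketch over the Cayley table of Z_6 and the coset partition by the normal subgroup H={0,3}; metricativity of a cage reduces to the IRREDUCIBLE test of D15, and the names are ours:

from collections import Counter

def cage_is_irreducible(M, A, B):             # the D15 test on a cage AxB
    rows = [Counter(M[p][q] for q in B) for p in A]
    cols = [Counter(M[p][q] for p in A) for q in B]
    return all(r == rows[0] for r in rows) and all(c == cols[0] for c in cols)

M  = [[(x + y) % 6 for y in range(6)] for x in range(6)]   # Cayley table of Z_6
PI = [[0, 3], [1, 4], [2, 5]]                 # cosets of the normal subgroup {0,3}
print(all(cage_is_irreducible(M, A, B) for A in PI for B in PI))   # True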

It implies that our normal partitioning, or the metricative matrix, is some kind of extension of the concept of a normal subgroup in Group Theory, or at least very close to it, although there exists a subtle difference between cosets and our polymetries. In a word, our partition is finer or stronger than the coset partition, as our left polymetrie and right polymetrie live in the same cage of the matrix, while a left and a right coset are apart from each other in different cages placed at diagonally symmetric positions.

In the view of matrices, a group is a very particular object and the difference is great. Our matrix has no unit element nor inverse elements, and the cosets partition the set X into blocks of the same size, while our normal partition is directed to orbits of various sizes. However the flavors are surely alike. The reason why the Delta matrices have their left and right half matrices is the same as in the case of cosets. It is said that when Galois introduced the concept of the normal subgroup, he at last reached the depth of the algebraic equation property and was able to prove the insolvability of quintic or higher equations.

Then what will be our next application? Yes, of course it shall be applied to our long outstanding problem, the NP-complete problems. Imagine a Hamiltonian path: this is the longest path in a graph and may be considered to have the largest PSI number. If we could compute the minimum PSI number in polynomial time, then we would compute the PSI numbers of every G-e subgraph and erase an edge which took the least value. Repeat this until we get a simple path, as in the sketch below. It takes just O(m^2*T) time, where m is the number of edges and T is the unit cost to compute a PSI number. Of course things do not go as easily as this, but it is imaginable that we would have got a very strong tool which we had never possessed.
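
A sketch of that peeling loop under the stated assumption: min_psi stands for the hypothetical polynomial-time oracle for the minimum PSI number, which we do not possess, and is_simple_path is an ordinary test; both are passed in as parameters.

def peel_to_path(edges, min_psi, is_simple_path):
    # Repeatedly erase the edge e whose subgraph G - e attains the least
    # PSI number, until only a simple path remains: O(m) oracle calls per
    # round, O(m^2 * T) total time for m edges.
    G = set(edges)
    while not is_simple_path(G):
        e = min(G, key=lambda e: min_psi(G - {e}))
        G.remove(e)
    return G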

A bottom line comment: the insolvability of the quintic equation by Galois may be interpreted in three ways: (1) there exist some highly PARASYMMETRIC samples which cannot be solved in polynomial time by our method, (2) to compute the minimum PSI number of a matrix in polynomial time is impossible, (3) the NP-complete problem is negatively proven along this line. What do you think?

==========================
         Exercise
==========================

Ex1. Prove or disprove the following propositions.

P1. Suppose nxn matrices M and M' and let P* and P'* be the sets of all matrix PSI numbers of M and M' respectively. Then M is isomorphic to M' iff there exist PSI numbers P \in P* and P' \in P'* such that P=P'.

P2. Suppose nxn matrices M and M' and let P* and P'* be the sets of all matrix PSI numbers of M and M' respectively. Then the PSI number sets satisfy P*=P'*.

P3. Let M be an nxn matrix on a set X sectioned by a partition K, let C be a birdcage AxA, A \in K. Suppose for all x \in A: polymetrie P!(x,A) is the same and also polymetrie P!(A,x) is the same. Then for all x \in A: P!(x,A)=P!(A,x).

P4. Suppose a cage C=AxB of a metricative matrix M sectioned by a partition PI, a left polymetrie P![A,B):(w,l,m) and a right polymetrie P!(A,B]:(w',l',m') in the cage C. Then the polymetries have the same set of isometries, ie, they have the same number of isometries (w,v,c) with the same weight w and the same cleavage c, but the valences v are possibly different.

P5. Let M(X^2) be an nxn matrix onto X, ie, for all x,y \in X: M(x,y) \in X. Then M is transitive iff for all x,y,z \in X: M(x,M(y,z))=M(M(x,y),z). If for all x,y \in X: M(x,y) \in X, then M is closed. Let G be a group acting on a set X and M(X^2) be a matrix representation of G, ie, M is transitive, closed and for all x \in X: there exist a point e and some point y in X such that M(e,x)=x and M(y,x)=e. Then M is IRREDUCIBLE.

P6. A Latin square is an nxn IRREDUCIBLE matrix M(N^2) such that the valence of every isometrie of M is one. If a Latin square M has a unit element and every point in N has its inverse element, i.e., for all x \in N there exist a point e and some point y in N such that M(e,x)=x and M(y,x)=M(x,y)=e, then M is a matrix representation of some group G.

P7. Let G be a group acting on a set X and M(X^2) be a matrix representation of G. Consider a normal subgroup H of G and a coset partition PI of X by H. Then PI is a normal partition of M with respect to the distance measure, in other words, the matrix M is metricative with the partition PI.

P8. A SYMMETRIC matrix is IRREDUCIBLE.

P9. A SYMMETRIC matrix is SUBSYMMETRIC.

P10. A CRYSTALLOID matrix is SYMMETRIC.

P11. Let M(X^2) be an nxn PARASYMMETRIC matrix. Then there exists a pair of points p and q such that D(p)=\=D(q), where D(p) is the matrix Delta number of the split M|p and D(q) is the Delta number of the split M|q, assuming that the evaluation is done segregatively.

P12. Let M(X^2) be an nxn SYMMETRIC matrix. Then for all pair of points p,q \in X, D(p)=D(q) when evaluated segregatively, where D(p) and D(q) are the matrix Delta numbers of the splits M|p and M|q respectively.

P13. Every orbit of the automorphism group of a matrix is SYMMETRIC. In other words, let A be a point subset of M and assume A is an orbit. Then the submatrix S=AxA is SYMMETRIC.

P14. Every strongly regular graph is IRREDUCIBLE, i.e., let M be any matrix representation of a strongly regular graph G; then M is IRREDUCIBLE.

P15. There exists no graph other than the strongly regular graphs whose matrix representation is PARASYMMETRIC.

P16. Let M(X^2) be an nxn matrix and Z be a Delta matrix of M. Then there exists a homomorphism f from M to both Z.x=M'(X^2) and Z.y=M"(X^2).

P17. Suppose two nxn metricative matrices M and M' and their Delta matrices Z and Z' with partitions K,PI and K',PI' respectively. Let the matrix Delta numbers of M and M' be D and D' respectively, and make the sections H=M\PI and H'=M'\PI'. Then D=D' <=> Z=Z' <=> H=H' <=> M and M' are D-isomorphic. (Cf. Ex6.)

Ex2. Regarding polymetries, there exists a distinction, left or right. Why can we neglect the distinction in the case of isometries?

Ex3. Show that the following is a fact: suppose a cage C=AxB, |A|=l, |B|=l', of an nxn metricative matrix M(X^2), the left polymetrie P![A,B):(z,l,m) and the right polymetrie P!(A,B]:(z',l',m') in the cage C. Let corresponding isometries of the left and right be L(w,v,c) and R(w',v',c'), and let GCD be the Greatest Common Divisor of l and l'. Then w=w', c=c', lv=l'v', and m=m'<=GCD.

Ex4. The note at D13 showed that a trivial block distance d[{p},{q}] is always available. Now show similarly that point to block/block to point distances, ie, d(x,B) and d(A,y) are always available.

Ex5. Show that if partition K is total, then Z_ij.x=Z_ij.y=d(i,j)=M_ij.

Ex6. Show that if the partition K is total, then the point Delta number D[i] of a Delta matrix Z coincides with the point PSI number P[i], and the matrix Delta number D of Z coincides with the matrix PSI number P of the matrix M; in other words, D[i].x=D[i].y=P[i] and D.x=D.y=P.

Ex7. Show that the section H defined at D18 is a homomorphic image of a Delta matrix Z defined at D17.

Ex8. How does the trivial section H* defined at D18 relate to the trivial automorphism of a matrix?

Ex9. Consider a group G acting on a set X and its matrix representation M(X^2). Since M is a matrix, it has its own automorphism group A. It seems that every matrix representation of symmetric groups and permutation groups has only the trivial automorphism of its own in this sense. Does there exist a group G whose matrix representation M has a non-trivial automorphism group A with |A|>1?

==========================
        References
==========================

[1] Brendan D. McKay, nauty, http://cs.anu.edu.au/~bdm/nauty, 2000.
[2] Boris Weisfeiler, On Construction and Identification of Graphs, Lecture Notes in Mathematics, Springer-Verlag, 1976, MR58#27590. Available at http://groups.yahoo.com/group/theory-edge/files/dgdegiorgi/book.pdf
[3] D. G. Higman, Characterization of families of rank 3 permutation groups by the subdegrees I, II, Arch. Math. 21 (1970), 151-156, 353-361.
[4] Chung Laung Liu, Elements of Discrete Mathematics, 2nd Edition, p. 151 (in Japanese), McGraw-Hill Inc., 1985.
[5] 16-64 vertex strongly regular graph samples in zero-one matrix format, http://groups.yahoo.com/group/theory-edge/files/Michiro%20Nasu/Strongly.zip. Files were copied from http://gauss.maths.gla.ac.uk/~ted/ in 2002.
[6] Michiro Nasu, Elsie.exe, Provable Polynomial Time Algorithm for Matrix Isomorphism, 2004. http://groups.yahoo.com/group/theory-edge/files/Michiro%20Nasu/ELSIE.exe
[7] Michiro Nasu, Narcissus.exe, Narcissus: Graph Isomorphism Experiment, 2004. http://groups.yahoo.com/group/theory-edge/files/Michiro%20Nasu/Narcissus.exe

==========================
      Technical Note
==========================

Hi D.... and all!

here I want to leave a small note on the technical matters. as you know, CS digs the depth level 1, KU:=CS^2 digs level 2, and NC:=CS^3 level 3. all through the experiment, in the center of the multiplied loops, CS rotates at high speed like the gas turbine of a jet fighter. accordingly it was most important to examine the correctness of the procedure. the high-pressure compressor for a turbine must pass a long and strict test placed in a wind tunnel. but the test itself was not easy.

if I say that we wrote ten times the amount of program code just to trace the behavior of the program itself, it may be a bit exaggerated, but half of it is true. in fact it was almost/absolutely impossible to trace it with my eyes. speaking from my experience, graphs of 25 vertices are the limit to trace with bare eyes. this time even 16 was far beyond the limit.

Elsie was written in C++ on Microsoft VC++ environment, and we provided several classes for huge numbers, arrays and matrices. some arithmetic operators are defined for the classes, including comparison operators. to compare two matrices M0 and M1, we only need to write "if (M0 > M1)". the huge number is expressed as a combination of a long double D and an unsigned long integer L. the bit widths of D and L are both 64 bits. (before we found that our problem was the Matrix Isomorphism, the huge number class had held values for both the graph and the complement, and for example '+' operator acted simultaneously on both G and the complement of G.)

as you pointed out correctly, our resource for the computation is quite restricted and miserably short of fully satisfying the formal definition of the huge numbers. our tactic is simply to avoid overflow. speaking broadly, we don't even care about the precision of the numbers. all we need is that they are calculated *deterministically* and hold at least the valid bits for comparison.

it may look like a cheap trick, but it was an inevitable choice for us and it is working as we expected. however this is great in a reversed view, i.e., the fact that the numbers calculated in such an easy way agree at every moment is, so to say, one more piece of evidence that the program is running in a truly deterministic (label-independent) way with zero error. we have two ways to avoid overflow of the huge numbers.

(1)Compression of the numbers
(2)Reduction of the numbers

when an array/matrix of huge numbers is given, at first we compress the numbers into a set of numbers {0,1,...,m}, where m<n^2 is the largest number in the array. this compression is done at every opportunity when the numbers are computed. the exponential bases X and Y for the computation of Delta/PSI numbers are selected just larger by 1 than the largest among the numbers. consequently the bases X and Y are said to be floating all through the experiment, but this does not affect the result. to be accurate, the bases X and Y are chosen as powers of 2 to avoid accumulation errors.
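
for illustration, a small Python sketch of this Compression step under the assumptions above (the names are ours; Elsie compresses on the fly rather than in one pass):

def compress(M):
    # rank-compress the entries of M onto {0,1,...,m}; only the order of
    # the values matters for comparisons, not their magnitude.
    values = sorted({z for row in M for z in row})
    rank = {z: k for k, z in enumerate(values)}
    C = [[rank[z] for z in row] for row in M]
    m = len(values) - 1                       # the largest compressed number
    base = 1 << max(1, m.bit_length())        # a power of 2 above m, as in the text
    return C, base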

the second device, Reduction, is applied to the integer part of the numbers. the arithmetic is mostly/only addition and multiplication. 64 bits for the integer would be satisfactory for our purpose, and we expected that overflow at the highest bit in computation could be neglected. however the cursed MS compiler lets it return 0 when overflow occurs. this is very inconvenient for our purpose. so we could not use the full 64 bits for the integer part. we cut the number into about half the bits and give it to the calculator. if the lower half bits are all zero, then we use the upper half bits. this is the meaning of the Reduction (2).

you may have a question how we maintain the partitions, for at a glance maintaining the partitions seems not so easy. however it is done in a very simple way. we maintain two integer vectors K and PI. for example, K={1,2,2,2,3,3} means that |M|=6 and M is partitioned into 3 ordered blocks {1},{2,3,4},{5,6}. to update the partition is easy: just take another vector V={0,1,2,2,0,0}, and make the addition V+K*n. then we have {6,13,14,14,18,18}→{1,2,3,3,4,4}. the second block {2,3,4} is refined into two newly ordered blocks {2},{3,4}.
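
the same update, spelled out as a sketch (the function name is ours):

def refine(K, V):
    # K: block number of each point, e.g. [1,2,2,2,3,3];
    # V: sub-numbers distinguishing points inside a block, e.g. [0,1,2,2,0,0].
    n = len(K)
    raw = [v + k * n for v, k in zip(V, K)]           # -> [6,13,14,14,18,18]
    order = {r: i + 1 for i, r in enumerate(sorted(set(raw)))}
    return [order[r] for r in raw]                    # -> [1,2,3,3,4,4]

print(refine([1, 2, 2, 2, 3, 3], [0, 1, 2, 2, 0, 0]))  # [1, 2, 3, 3, 4, 4]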

once the order of a block is determined, basically we do not move its position any more, even if the block is partitioned into subblocks. however if a subblock is upgraded to an ordered block, then it is placed in the order of the size of blocks. consequently the determined points (trivial blocks) are always at the leftmost of the partition K. since the points are not necessarily in the order of the PSI/Delta numbers of the points, we do not claim that the matrix PSI number is the minimum; rather we call it the reference PSI number.

sorting appears everywhere in our program; especially CS is said to be a cluster of sorts, as I showed previously. the sorting algorithm we adopted is a most simple one, as we believe that it is most effective for a small data set like under 100 records. the algorithm to transform an adjacency matrix to a distance matrix is due to Floyd, the well-known simple O(n^3) algorithm. thus our program consists of just the easiest/simplest tools. I already wrote w.r.t. the polygram and PI-tree. although it is not that there is nothing more to tell, looking back, there was little to take up in our program from a technical point of view... anyway thanks for your help.
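
for completeness, here is the Floyd transformation the note refers to, as a sketch; the sentinel value for unreachable pairs is our choice:

def adjacency_to_distance(A):
    # Floyd's O(n^3) algorithm; unreachable pairs stay at the sentinel n+1,
    # which is larger than any real distance in an n-point graph.
    n = len(A)
    INF = n + 1
    D = [[0 if i == j else (1 if A[i][j] else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D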

M.N.

ps: if there exists a request for the source code of ELSIE, I'm willing to release it at our archive at some suitable time. (9/23/2004)

Hi D.... and all!

it seems that some people took my wording wrong and despaired at our stance toward mathematical rigor. it might be that they stumbled at the line, "speaking broadly, we don't even care about the precision of the numbers". please don't take my saying literally. perhaps I have to explain the steps, how carefully a Delta matrix is built up and how short a time the matrix is alive.

as I wrote in my previous mail, we compress the numbers in a very bold way. however this Compression is effective only within a very restricted and short time span, hence absolutely safe. Compression is applied when a Delta matrix is built up. a Delta matrix Z is built up at *every* cs-slice from the original matrix M, referring to the *last* updated ordering O and the partitions K and PI, and is discarded entirely by the next cs-slice, where a cs-slice means an *inseparable* unit process in the Critical State Experiment in which the parameters O, K and PI are *never* changed, substantially a single layer of the multiply layered loops of CS.

Reduction is used for the computation of Delta/PSI numbers. these numbers are calculated, so to say, as one shot of the experiment and never affect each other, i.e., they are calculated independently. the Delta numbers are mainly computed to give an order of points and blocks. since our arithmetic is obviously incomplete, the *absolute* order is less meaningful. (this is one more reason why we don't call the PSI number the minimum PSI number, and instead call it the reference PSI number.) however for our purpose this is enough, as what we need is just the *identity*, i.e., the information whether A=B or not. consider we have a 10-digit decimal calculator and are given two numbers 1234567890 and 7890123456. now we compute 1234567890*7890123456=9740893066913427840 and the calculator returns the 10-digit decimal

6913427840.

this is a desirable/designed result for us. however actually we have to compute the half-width numbers 67890*23456 and get

1592427840.

this is not at all the best, but acceptable for us. if we give binary-expressed numbers to the calculator, the result would be much better, as we can avoid the accumulation errors. this is a simplified simulation of what we call Reduction, and we think that it is sufficient for the aim of ELSIE as a conceptual/demonstrative prototype of our algorithm. certainly the number obtained is, so to say, a fake, but at the same time an intrinsic identity with no doubt. as far as our purpose is to determine *identities* for n objects (points/blocks), it is just as sufficient as the Compressed numbers {1,2,...,m} are for the Delta matrix computation.

maybe we've lost a considerable time only for tuning arithmetic tools. if we had built in multi-precision arithmetic from the beginning, there might have been no such difficulty. however if the computation took a day per sample, perhaps we would never have reached the present point. for the debugging we need at the minimum a 20-minutes-per-sample efficiency. anyway these arguments are non-essential from the viewpoint of the proof, of course. (9/23/2004)

Hi D.... and all!

...snipped

I want to leave a note w.r.t. the *polygram* and the *PI-tree* as a part of the log of our long sailing. although they are removed from our program absolutely, they were very effective for verifying the correctness and consistency of the algorithm.

a polygram is a number which represents the history of a session of the Critical State Experiment. actually it is a sum of some magical numbers like "18312", which means that the matrix is partitioned from 1 block to 8 ordered blocks and from 3 unordered blocks to 12 in the Omega Decomposition or Isometrie Cleavage. earlier we maintained such records in string format. but it was simply impossible, because the occurrences of the splits are random and label-dependent. as I reported, it was at the very last moment that the polygrams came to completely agree with the Omega numbers.

as well, we maintained a weighted tree named PI-tree which also represents the history of the partitioning process. a node p of the tree is either an ordered block or an unordered block. when the block p is divided, we add branches to the node and place the new blocks under the node. each node is given a *rational* number as its node weight which represents the path from the root to the node in some way (not at all magical). then the tree weight is computed as the sum of the node weight.

of course the PI-tree weight must exactly correspond to the Omega numbers. further we were concerned that the PI-tree might differ from the ordering, i.e., we considered that it might offer some new information w.r.t. the ordering. if so, we would have to rewind the Critical State. however we finally found that there existed no discrepancy between the PI-tree, the Omega numbers and the ordered/unordered partitions.

these are the evidential facts that strengthen our claim that the algorithm (and its realization) is truly deterministic. now we are testing the series of the strongly regular graphs with 35 vertices. I don't know when it halts. it may take ten days on my computer (Windows XP, 1.2GHz, 128MB RAM, 20GB HD)... (18/9/2004)

==========================
       Elsie Manual
==========================
DOWNLOAD: Elsie.exe, Polynomial Time Algorithm for Matrix Isomorphism
DOWNLOAD: Narcissus.exe, Graph Isomorphism Experiment 

Hi D.... and all!

I just released the revised Elsie and Narcissus at our archive. they work with undirected graphs of up to 65 vertices. directed graphs are rejected, but disconnected graphs are OK (including totally disconnected graphs). the matrix base is the Distance Matrix, i.e., the program reads an adjacency matrix file (or makes a random graph), transforms it into a distance matrix, generates a partner (if necessary) by shuffling the matrix, and starts the experiment. samples are always saved in temporary files, example0/1.txt (Elsie) or narkissos_1/2.txt (Narcissus). at the end of each experiment, it outputs a simple one-line record, each field separated by a tab, to the log file "experiment.log" in the format:

PSI-number normality (n,b,c) axes n m file-name date-and-time

where (n,b,c) indicates the partitions K and PI at the time when the first NutsCracking was executed, n=matrix size (the number of the points), b=number of ordered blocks, c=number of unordered blocks, m=number of edges, and normality, like CRACKPOT, INNOCENT etc., is the normality of the matrix. if '+' is at the head of the normality field, then the sample is a strongly regular graph. in such a case, Elsie tells you so on the screen, showing the parameters (n,d,l,m), where n=order, d=degree, l=lambda, m=mu.

*axes* is the number of the breeders arbitrarily determined. you will see (n,b,c) similar to the above very often on your screen. if it is in (n,b,c,d) form, then d=c-b. at the end of each (half) experiment of a matrix, the orbits of the group are displayed.

Narcissus can read a zero-one matrix file (even multiple samples separated by a null line) and a sample list file (batch file). further it has an option to test all the k-regular graphs in the range of 14≤k≤16 automatically. Elsie can read arbitrary *general* matrix sample files or generate random matrix samples as before. if Elsie is executed with an argument like

>Elsie sample-file

then it computes the PSI number of the sample silently and just outputs it on the console. the working directory is always where the executable was placed.

...snipped

we've not yet tested the 36 vertices series (227 samples) and the 64 vertices series (167 samples). the 40 vertices series (12 samples) was already cleared. next I'll try 36 vertices, but maybe it will go about similarly to the 35 series. so it is sure that clearing them is a matter of time. the most urgent problem for us at present is the 64 series. although Narcissus is able to read 64 vertices in the specification, it seems simply too heavy to carry out... if anyone has a high-speed machine and some free time, please test the 64 series. any questions and suggestions would be appreciated.

Michiro

ps: apart from our current labor to establish a provable ptime algorithm for matrix/graph isomorphism, we direct our attention to what we call axes, or the number of symmetry axes, from a somewhat theoretical viewpoint. the number of the symmetry axes is a matrix invariant in a strong sense, i.e., independent of the matrix representation base. maybe due to my ignorance around the domain, I've heard nothing about this matrix/graph invariant, while it appears in our experiment clearly, and I wonder if it has its own significance in the theory. do you know anything about it?

the number of symmetry axes is of course not at all the number of orbits nor anything else. it represents the *freedom* of the matrix/graph in a topological space. in other words, it may represent some topological *constraint* in the reversed view. do you think that we need such a topological knowledge for the proof of our algorithm? anyway for a successful experiment with our method, the most important but not-well known factor is this.

regarding the potential mistakes of Elsie: I think that logically the first case is of zero probability. if Elsie fails for the first reason, then it must be a bug, because as far as I've checked the algorithm, there exists no hole in the ordering method which could cause such a first mistake. i.e., the *deterministic* part works as designed. however the second case is to be considered seriously, i.e., the selection of an *arbitrary* breeder may cause trouble. it would mean that Elsie failed to find the symmetry axis mentioned above.

if a monster exists, then it appears in this case. all we can say is that the monster must be relatively large. so it is very hard for us to detect one in our poor environment. our experiment descends down to the depth of level three. (taking account of the Wake Up Innocent, it may be level 3.5.)

CS: CS^1, depth level 1
KU: CS^2, depth level 2
NC: CS^3, depth level 3

if a monster exists at a level deeper than three, we have to abandon the polynomial time solution. but at least we can say clearly that it is no deeper than the log n level. this is the bottom of our problem, i.e., the upper bound of the complexity.

I think that the depth level three is very critical. as far as I read, it seems that Boris et al. took the same strategy with the depth level 3. I guess that it is somehow relevant to the fact that every strongly regular graph has diameter two. (incidentally it is known that almost all graphs are of diameter two.) in this sense, we have a strong interest in the question whether a non-strongly-regular CRACKPOT exists or not.

BTW, despite my saying in my previous mail, I think that the naming "geometric" is not suitable for our "distance matrix". when you consider our matrix space, you had better imagine a galaxy rather than a geometrical map on the earth, because we do not assume any concrete edges or paths between the points. thus our "distance" is not at all a geometrical one; rather it is "astronomical" or something like that. I haven't fixed the naming yet, but "parametric matrix" may be a candidate... (9/20/2004)

=====================================================================
Date: Oct 9, 2004
Author: Michiro Nasu
Affiliation: Baba Laboratory
Address: 1-3-72-2H Inari-cho, Fukaya, Saitama pre., Japan 366-0026
TEL: (81)048-574-3623
E-mail: nasukun@aya.or.jp
URL: http://www.aya.or.jp/~babalabo
=====================================================================