Mystic Pizza

Norm Nason and Paul Almond, over at Machines Like Us, have managed to pull quite a coup and conduct a long and fascinating interview with the philosopher John Searle, on his Chinese Room argument and others.

As anyone who’s read my books may have surmised, I don’t agree with all of Searle’s arguments and I don’t share his disbelief in the possibility of Strong AI (even though I doubt very much that a digital computer is a practical medium for such a thing, long-term). But rather than discuss it here I’ve posted a long comment on the original site. It’s too big a subject to tackle in a blog post really, let alone a comment to one, so maybe I’ll have to write another book. I can’t make up my mind whether I next want to write a book called “Machines like us” (Norm borrowed the title for his site from one of my talks), about mechanism and the human condition, or whether to write one about “Un-physics” – a more general elucidation of a process-oriented view of nature, the behavior of complex feedback systems and self-organization. Does anyone care either way? I don’t suppose so.

Anyway, Paul’s excellent interview with John Searle can be found here, and my somewhat inept and hurried attempts to put forward an alternative view are here. Enjoy.

About stevegrand
I'm an independent AI and artificial life researcher, interested in oodles and oodles of things but especially the brain. And chocolate. I like chocolate too.

18 Responses to Mystic Pizza

  1. Michael O'Connor says:

    Excellent response. I feel that your first point of disagreement there is especially important, as it seems to be the place where a great many people get stuck, assigning too much value to our own elementary particles and physical laws simply because we can’t see the code that runs them.

    I would very much be interested in both of those books. You have until noon tomorrow. Chop chop.

    On second thought, don’t you dare start writing a book if it is going to lengthen the amount of time before I am able to play around with simbiosis. You do not release virtual worlds nearly often enough for my liking.

    Seriously though, love your work.

    • stevegrand says:

      Thanks Michael!

      > I would very much be interested in both of those books. You have until noon tomorrow. Chop chop.

      Damn, I’ve missed the deadline already! But on second thoughts you’re right – I should finish the game first.

      Cheers,
      Steve

      • Michael O'Connor says:

        For what it’s worth, I think I’d rather read “Un-physics” first. I imagine it would focus on some of the ideas I liked best in “Creation”, but in more detail, and that can only be a good thing.

        But if it came down to a “only one of these books will ever exist, choose now!” sort of scenario, I’m not sure what I’d do. Probably panic and start crying.

      • stevegrand says:

        🙂 Yep, I’d like to expand on the question of what is real and how things come to exist.

        But I’m taking your first advice first – get the damn game finished.

  2. Daniel Mewes says:

    I am just thinking…
    A human’s life is limited, so we can assume that the number of (e.g. Chinese) words one can “process” in their life is finite, too.
    This however would make it possible to indeed write a program that gives appropriate answers to any possible sequence of Chinese sentences (since the mind is stateful, sentences can of course not be processed in isolation) simply as a giant if … else if … structure. Every time you “speak” to this program, the sentence may be added to some internal memory, so that the next time you ask the same question, you may get a different response (the conditionals in the program would thus not compare only the “current” sentence but a concatenation of all the sentences the program has received so far).
    It should be possible that – at least within the “Chinese room” – such a program really behaves exactly like some human.
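
    (Purely as a toy illustration of the structure I mean – the table entries here are made up, and of course a real table would be astronomically large:)

    # Hypothetical sketch: the "Chinese room" as one giant lookup table,
    # keyed on the entire history of sentences received so far.
    RESPONSES = {
        ("ni hao",): "ni hao",                             # first greeting
        ("ni hao", "ni hao"): "women yijing jian guo le",  # "we have already met"
        # ... one entry for every possible sequence of sentences ...
    }

    history = []

    def chinese_room(sentence):
        history.append(sentence)
        # The if ... else if ... cascade collapses into a single lookup
        # on the concatenation of everything said so far.
        return RESPONSES.get(tuple(history), "wo bu dong")  # "I don't understand"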

    The question is, would such a machine have a conscious mind? Personally I think yes, but I may be missing some point.

    Of course the number of possible (and actually taken) states of our mind is not finite (since it works in a truly analogue way), but does this make a qualitative difference? (I think not.)

    To come back to the question whether an if-then program could have a mind: in the end the machine proposed above really has to incorporate nearly every single neuron’s state of the human mind in one way or another (as well as the concentrations of neurotransmitters, hormones and other chemicals at different locations), because it is quite simple to see, I think, that otherwise it could not give adequate Chinese answers in every (!) possible situation. Thus memorizing the sequence of past Chinese sentences is just a different representation of biological state (with a good compression ratio actually, although the giant program easily erases that benefit 😉 ).
    So the difference between “real” physical brains and simulated ones may just be a difference in the representation of its state.

    PS: OK, one should also add some timer to the state of the machine (and honor that timer in the conditionals), since not only does the sequence of Chinese words matter, but also the timing, because humans really are time-aware. 🙂 (A human might even fall asleep if you wait too long with the next sentence and don’t give an answer at all!) Sure, this timer would have a limited resolution, but then again I do not think that this makes a qualitative difference in the end.

  3. stevegrand says:

    Hey Daniel,

    Small world – I’ve only just this second replied to your post over at grandroids.ning.com!

    I think Searle would say, and I think I would agree with him in this respect, that such a system DOESN’T really understand Chinese and hence there’s no real THINKING going on. It may know how to say that noodles go well with duck but it doesn’t know what that MEANS, at least in part because it can’t starve.

    Individual humans think and say a finite number of things but they COULD say and think any fraction of an almost infinite number of things. So at the very least, any system that tries to imitate intelligence by reading some inputs, looking them up in a table and spitting out some output is going to be pretty bulky. I estimate that a lookup table capable of directly converting any English sentence into any Chinese sentence (even without remembering what was said in the previous sentence) would be at least 60 trillion terabits long.

    But I think you touch on an important point by suggesting that such a lexicon is computationally equivalent to a biological brain that would utter the same Chinese sentences in response to a given set of English equivalents. This is the argument of computational functionalism, which says that the substrate doesn’t matter, as long as it gives the same outputs for the given inputs. This is what Searle is arguing against.

    My own view is that Searle is wrong because he is right.

    I say that there CAN BE alternative computational structures that really understand Chinese, as long as those structures look, at some deep but not truly fundamental level, like the human brain, and this brain is connected to a body that looks, at some deep but not truly fundamental level, like a human body, and this body has been brought up in China. Or at least somewhere that approximates, in some deep but not truly fundamental level, China.

    I’m suggesting that the “physical reality” of the fundamental substrate is not important, as long as the computational and dynamical mechanisms that emerge from it are equivalent to those that exist in nature. In other words thought is a phenomenon that emerges from the interactions of neurons and the world, and it doesn’t make any philosophical difference whether that world and those neurons are real or simulated.

    But you probably need something substantially like neurons wired up in something substantially like the way the brain is wired, and certainly you need a world. Where I think Searle is correct is in fighting the long-held assumption that just about any old representation will do, and that understanding is nothing more than symbol manipulation.

    So I don’t necessarily disagree with you that a symbol-manipulating program with qualitatively the same behavior as a real Chinese speaker would really be thinking in Chinese. What I disagree with is the premise that you can actually HAVE such a thing without that program looking remarkably like a brain, embodied with sensors and actuators, and situated in the world.

    It’s easy to see that a book of phrases for visitors to China makes a fair stab at behaving like someone who understands Chinese, up to a point. But only up to a point. From there onwards it gets exponentially harder. Like I say, sometimes it’s easier, instead of trying to build something that looks like a duck, quacks like a duck and does a million other things like a duck under every possible circumstance, simply to build a duck. I don’t think there IS a book that can flawlessly translate any possible Chinese phrase into English, so the Chinese Room experiment is a bit of a cheat. I don’t think there are any direct algorithms that can emulate general thought processes, unless they emulate the brain at some deep level. Weak AI therefore has its limits, but Strong AI isn’t necessarily impossible.

    But this is SUCH a deep question that I’m sure I’ve said something really stupid that I’ll regret in this post alone, let alone the others. It’s hard to get your head around.

  4. Daniel Mewes says:

    Yea, already saw your response there. Thanks to the Internet both sites are just a click away. 🙂

    I don’t want to go through each single point of your response individually, but rather point out a few things:
    1. I think I may not have made my idea clear enough. Of course I was talking about a purely theoretical program running on an abstract machine with a giant memory. Even if only one sentence can be “fed” into the Chinese room once every minute (and let’s assume for simplicity that there is only one really short moment in every minute at which it is possible to feed a sentence in), and if there were only 10,000 possible sentences, that would come out to 10000^(60*24) possible sequences of sentences that may have been fed into the room in just a single day (yes, that’s an exponent). For sure such a machine or program can never practically exist.
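
    (Just to put that size in perspective, a quick back-of-the-envelope check in Python:)

    # 1,440 one-minute slots per day, 10,000 possible sentences per slot.
    from math import log10
    slots = 60 * 24
    sentences = 10_000
    print(slots * log10(sentences))  # 5760.0 -> the count has about 5,760 decimal digits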

    2. Regarding “What I disagree with is the premise that you can actually HAVE such a thing without that program looking remarkably like a brain, embodied with sensors and actuators, and situated in the world.”, I totally agree with you when it comes to the real world, but in theory the program I proposed is just such a thing!

    So, 3. What it comes down to, I think, is the question whether the representation matters for whether there is a mind. I already said that the if/then program I proposed really incorporates most if not all of the state that a real person’s mind would have.
    Since the Chinese room is some kind of black box, one does not necessarily need any sensory, actuator or environment simulation. Everything that happens inside this room is either deterministic or – when it comes to quantum mechanics – it is at least not distinguishable from the outside whether a deterministic process inside the room generates the Chinese answers or whether random quantum-mechanical processes are going on there (as long as there are no hidden channels). So the state of the room really *only* depends on the sentences one gives to the room and at what time one gives them. No matter whether an actual Chinese man answers the questions or the proposed if/then machine does.

    It’s just that the state information is represented in a different form: either in the form of a sentence/timestamp backlog in a computer memory, or as the state of a neural network together with the physical state of the Chinese man in the room (and to some extent the state of the room itself, like air temperature and so on… the if/then program would of course honor physical state changes caused by any previous sentence in its ongoing answers, too).

    So, does the existence of a mind depend on the physical representation of one and the same state? Personally I do not think so, but I don’t have good proof of this at the moment.

    4. This actually is an additional thought, but introduces a different (IMO quite interesting) view of the problem:
    Let’s say one actually wants to write an if/then program as proposed. So for every possible sequence of sentences, the programmer has to find out what an actual human would answer, so the program can later respond just like a human would, too. The human’s answers then have to be put into the program.
    One approach is to simply build 10000^(60*24) rooms, place one of a set of completely identical Chinese men with identical states of mind and body in each of those rooms, and then confront each of them with one of the sentence sequences (of course one sentence within a given sequence after another). Of course it is impossible to even get two people that are in the same state at any given point in time. Even more, due to quantum mechanics, the people in the different rooms would not be in the same state any more after a short time, even if the sequences of sentences had not yet diverged (and that would lead to inconsistent answers between the different people => the if/then program would have insufficient information available to pick one of those later).
    So the more “practical” approach would be to design some deterministic simulation of a Chinese man, e.g. using virtual neurons forming a virtual brain that lives in a virtual body within a virtual room. This way one could simply create the required number of copies and run those against the sequences.

    (and finally here comes the interesting point, especially if we use the real people I think!)
    Either way we do it: in order to generate the if/then machine, for every possible sequence of sentences that may occur, one or another Chinese being (either virtual/simulated or real) has actually undergone all the sentences that we may later feed into the Chinese room (in which our if/then program will run by then)! So what we essentially do is a kind of time shifting. We shift every single possible sequence of occurrences from the future (when we will converse with the Chinese room) to the present (when we generate the program using the proposed method). We then conserve the responses in order to replay them in the future, where the if/then program can then decide which of the sequences it has to follow and thus which answers it has to replay.

    So there again the if/then program together with its backlog memory IMO is just a different representation of a mind. At any given time it represents exactly one of the (virtual) people’s minds at the time of development of the program.

    Well, I am not sure at the moment if this really helps with the question whether the if/then machine has a mind of its own or not. But then again we just took the sentences from the time of interrogation to the time of the machine’s development (by simply going through all possible “futures”) and then back to the time of interrogation, by incorporating the answers into the if/then program. So the mind “in” the if/then program exists for sure, it just exists at a different place in space-time (then again one may say that the copy of the mind’s state in the if/then machine may be incomplete to some extent).

    Well, I have already got far beyond what my very own mind is able to handle, at least at the moment (it’s nearly midnight here in Germany now), so before producing more thoughts that lead nowhere, I had better stop writing here. 🙂
    Hope I do not keep you away from doing more important things by keeping you occupied with such long comments…

  5. stevegrand says:

    Phew! Once John Searle comes up in conversation, it never seems to stop!

    > For sure such a machine or program can never practically exist.

    Mmm, but there are serious risks in forming logical arguments from something that is possible in principle only. Especially if infinity comes into the equations at some point. I wasn’t so much disagreeing with you as with John Searle.

    This hypothetical book of translations is being compared to the behavior of a thinking being (only a small part of the behavior of a thinking being in Searle’s experiment, but a much larger part if you extend it to your chatbot-type scenario). Now, thinking beings do exist in practice, so it’s dangerous to compare them to something that exists only in principle.

    It’s related to the Turing Test: people often wonder when a machine will pass the Turing Test, but the Test CANNOT be passed, only failed. A random number generator will fail very quickly, a really good chatbot will fail more slowly, but nothing can ever be said to have passed, because it has to keep on convincing you it is human INDEFINITELY.

    The chatbot-like scenario you are talking about (where the device has to respond to your questions and comments, rather than merely translate them into another language) has to be able to emulate intelligent human behavior forever before you can say that it exactly matches the output of a real human. Real humans really think, and therefore can be creative. They can say things nobody else has ever said, invent languages nobody has ever heard, solve problems never solved. How are you going to copy that using an IF/THEN table? Just because you could do it if the table was infinitely long and you had infinite foresight doesn’t allow anyone to use such a black box in a deductive argument. I don’t think so, anyway.

    It’s like asking whether a multiplication table can multiply. Yes it can if it is infinitely long, but otherwise it’s only fooling you and eventually it’ll fail to answer questions that a calculator can always answer. Can an infinite multiplication table multiply? Yes, but so what? There can never be an infinite table, so what conclusions can you draw about it? In reality you’re comparing something real with something that you can sort of envisage but which couldn’t actually exist. Is that fair? I don’t think so. There’s a big gap – perhaps an infinite one – between “sort of envisage” and “draw reliable conclusions from”.
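
    (To make the contrast concrete, here's just a throwaway sketch, nothing more:)

    # A finite "multiplication table" versus actual multiplication.
    TABLE = {(a, b): a * b for a in range(100) for b in range(100)}

    def table_multiply(a, b):
        return TABLE.get((a, b))     # silent failure outside its repertoire

    def real_multiply(a, b):
        return a * b                 # the algorithm never runs out

    print(table_multiply(7, 8))      # 56 - looks like it can multiply
    print(table_multiply(123, 456))  # None - the illusion breaks down
    print(real_multiply(123, 456))   # 56088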

    A chatbot that can *usually* behave like a human being doesn’t think, because thinking just doesn’t go wrong in that way. An infinite IF/THEN chatbot that can, for example, claim to have discovered how the human brain works and then prove it to you, is definitely indistinguishable from a thinking being, but you can’t have such a thing, so what does it prove?

    In Searle’s case I have a suspicion that he’s requiring us to conceive of the perfect infinite translation machine when demonstrating to us that it’s indistinguishable from a human, then reverting to the imperfect reality when asking us to believe that it’s “obviously” not really understanding Chinese.

    At least I think so – I’ve spent all day discussing related topics on several fronts and my head hurts. Basically I’ve lost track of what I DO think now! Time to give up and have a glass of wine, I think. Yes, that’s something I can get my head around…

  6. Daniel Mewes says:

    Well, the infinity thing is exactly why the whole idea was based on this initial thought:
    “A human’s life is limited, so we can assume that the number of (e.g. Chinese) words one can “process” in their life is finite, too.”

    No Turing test can run indefinitely. After a reasonably long time, say 150 years, the interrogator might consider the test failed, simply because the thing he is speaking with still gives answers, while a human would for sure be dead after such a long time. Actually this moment might come much earlier, since most humans would not chat for more than a few hours when participating in a Turing test (they would walk away in order to get some sleep, eat something etc.), so every machine that attempts to pass it should not only give the “right” answers, but should also stop giving answers after some time. Everything else is highly suspicious.

    However, this whole if/then idea is not overly important to the argument; it’s just a specific example of a theoretically possible (because finite) machine that by definition behaves exactly like a human being in every possible situation within the Chinese room. And it is an especially interesting machine philosophically, IMO, because of the (only possible) way one may create it (see point 4 of my previous comment).

    Because actual humans (or simulated ones, let’s not care about Searle for a moment here) were used in order to generate the responses of the (finite but giant) if/then machine, it really behaves in a way that is exactly indistinguishable from a human being.

    This does not give an approach to AI or something, I just found it to be an interesting thought experiment when thinking about whether an if/then program may have a mind of its own.

  7. stevegrand says:

    Hey Daniel,

    Hope I haven’t upset you – my fight isn’t with you but with some of Searle’s logic (not all, I agree with a lot of what he says) and especially some of the extrapolations people have made from it in attacking Strong AI and defending Weak AI over the twenty years since the Chinese Room was first proposed.

    If you’re going to be pedantic about it then I still say the Turing test has to run indefinitely, since in order to convince you that the machine is really, definitely, absolutely human it has to die after 70-90 years and demonstrate that it never starts up again. Ever. But less pedantically I was just trying to show that the repertoire of a real thinking human is not circumscribed – we are not simply knowledge bases.

    My point is that the human brain can continue to generate NEW sentences and ideas. This is fundamental to what we mean by saying that a human *thinks*. Any machine that just regurgitates what someone else has told it is not intelligent.

    The only way an IF/THEN list could duplicate that creativity is if the people who program it have thought every thought that could ever be had, invented every invention possible and reported on every event that will ever happen (by which time intelligence becomes superfluous anyway).

    (You could argue that the universe itself is already that IF/THEN machine, since all the events that will ever occur are contained within its configuration. But that gets us nowhere.)

    Intelligence is more than the repetition of knowledge. Humans are more than expert systems. A mind is more than just a set of responses to stimuli.

    You’re right – it is an interesting thought experiment and has generated widespread controversy for many years now. I’m sure it’ll continue to do so.

  8. Daniel Mewes says:

    Don’t worry, you did not upset me. 🙂
    I actually totally agree with you on most points. My proposed machine really only works under several very strict limitations, which allow it to become finite. The Chinese room experiment may however allow the necessary restrictions.
    If I think it over again, the if/then program is just a different form of the translation book + program proposed by Searle. But the really interesting point in it, in my opinion, is the fact that in order to create such an if/then program, the thoughts that are necessary to answer every possible question actually have to be thought at some time by a different machine or by some human being. The if/then program just conserves the answers. I think, however, that this program is then not too different from other simulations of a mind. It is just kind of extreme.

    The idea behind the simple if/then program is to do essentially all the work at programming time, and nearly none at run time. This makes the development way too complex, however. A more practical solution may perhaps be artificial neural networks, where only some work is done at the time of designing and programming the network, while most of the thinking is carried out at run time. At the other extreme would be a giant physics simulation of the whole universe, in the hope that life evolves automatically inside this virtual universe. In this case the programming effort is quite low, while the run-time requirements are extreme!

    My point here: the question really is not whether a machine is pre-programmed or evolving. The right question, in my opinion, is how much it is pre-programmed. There are a lot of levels in between.
    The next question then is whether the ratio of programming-time to run-time work makes a considerable difference to whether the system has a mind or not. Given that the result is the same, I do not think so.
    If it does, where does the tipping point lie?

    Searle, of course, as far as I understand, states that there is a general difference in quality between things existing in the real world and things existing in a virtual world (possibly interacting with the “real” world somehow), so all the considerations above would not really matter to him, and it is hard (if not impossible) to prove the opposite.
    I find your pizza analogy excellent by the way, that really gets to the point!
    (a thing that I obviously am not able to do in this discussion)

  9. stevegrand says:

    Congratulations, you’ve just progressed through twenty years of AI philosophy in a day! 😉

    Where is the tipping point indeed???

    As far as I’m concerned, a knowledge base (the IF/THEN structure) is simply not intelligent at all. It just embodies the intelligence of its human designer. A lot of early AI was like this. Some still is.

    At the other end of the scale, the human brain is very free to learn and think for itself, but even the brain is “pre-programmed” to a degree, this time by evolution. So I agree that the cut-off point lies somewhere in the gray area. (It’s worth mentioning, though, that the “program” written by evolution doesn’t actually do the thinking, it interacts with the environment to create the design of the mechanism that does the thinking).

    Anyway, I think that where you draw the line *does* have implications for whether the machine has a mind. If it wasn’t learned by experience then it isn’t intelligence and has no meaning for the machine, so the machine can’t be said to have intentionality.

    But I think this whole general discussion about outward behavior misses the point in some ways. Having a mind doesn’t make us conscious of the world, it makes us conscious of an inner world. “I” exist inside a virtual world in my head. This world is very similar in its properties to the real world, and exists because it has grown out of my real-world experience. But it is somewhat independent of the real world. In my mind’s eye I can fly. I can build things I’ve never seen. I can work out what will probably happen in scenarios I’ve never witnessed. I can make plans, have hopes and dreams and fears that are independent of what’s happening to my body. It is this that I am conscious of, not the external world, and my consciousness still exists even when I’m not producing output.

    Now, a series of IF/THEN input/output rules does not create an alternative reality, as far as I can see. It does not give rise to imagination, and therefore it is not conscious. Searle would rightly call it a zombie – it (supposedly) behaves as if it is conscious but really there’s no-one at home, because home doesn’t exist.

    All the stuff I was saying earlier was to suggest that you couldn’t even really create such a zombie – that there’s no list of explicit input/output rules that can perform like a human can perform, because much of what conscious thought does is carried out using a *model* of the world.

    You could possibly add a few trillion IF/THEN statements to create something close to (but a pale imitation of) an internal model of the world, on top of the rules you already have for relating input to output. But still you lack the mechanism for making use of that model in order to be inventive and make complex predictions. Conscious thought requires both the data and the mechanisms for making use of it.

    My assertion is that consciousness (and certain aspects of intelligent thought) require an imagination. A machine without this internal model and its ability to run out of synchrony with reality can’t be conscious because it has nothing to be conscious OF or conscious WITH, no matter how much it has been programmed to tell you it is conscious. And if you quizzed it hard enough you’d soon discover it’s not really conscious or thinking anyway, because no list of stored IF/THEN rules can emulate the creativity of the human mind.

    Goodness! We’ll have practically written a book by the time we’re finished with this topic! 🙂

  10. Zach Blankenship says:

    Congratulations on an excellent argument Steve,

    I favor the idea of ‘un-physics’; after all, it is your ideas regarding matter as merely a form of organization that have had the largest impact on me overall.

  11. Pius Agius says:

    Steve,

    Why is it that when I notice these long discussions I am drawn to them as a moth is to a flame? I find that I am a witness to understanding. Now let me make sense of this last sentence. When something is talked about or written about in such an open forum it captivates me. The good-natured give and take is so different from the rest of the world, which is usually: take it as we dish it out, right or wrong.

    Here we are throwing ideas and concepts around and off the walls. Some of them will survive and others will fall apart, but in the process I know that something is being achieved. You guys are laying the foundation of the philosophy of robotics. I know this sounds a bit too grand, since philosophy and robotics have both been around for a while. However, here the actual practitioners of this field are taking the ideas off the pages and breathing new life into them, and it is fascinating to see this evolve before my eyes.

    All of us who read these are the better for it. I cannot always contribute, but know that I find this stuff just what the doctor ordered, since I need my fix of robot information.

    Look forward to more ‘short’ talks,

    Take care

    Pius

  12. maninalift says:

    It may not be intended as such, but the Chinese Room is functionally equivalent to a rhetorical sleight of hand. The pieces of paper are a misdirection; if you take those away and conduct the thought experiment entirely in the mind of the imaginary subject, then it is laid bare for what it is: if a person learns to answer any Chinese questions as if she understood, without making any further associations, then she does not understand Chinese.

    Well yes, given that we understand understanding a thing to mean that we can place it in the context of everything else and make the appropriate associations.

    And like the Turing test, when you really pursue this you see that in order to conduct a convincing conversation the subject doesn’t just need to learn the “rules of language” but needs to be able to make complex associations very much like “concepts”. In essence the subject needs to have a “world model” within which they understand the words.

    The “giant computer” argument is roughly saying that instead of keeping track of concepts and relationships between concepts, every possible combination of concepts and their relationships is encoded separately. Of course if this machine is to be unchanging it must be programmed from the outset to respond based not merely on the current input but on the whole history of inputs.

    It is this separation of the information that I think people associate with not understanding.

    However as I see it, to program the machine involves using another machine (human?) that really does have a “world model” with associations between information, and exhaustively asking it all possible combinations of questions and recording those answers. Looking up the answers on the “massive computer” is then just a proxy for asking the “programmer computer” the same question.

    P.S. In the modified Chinese Room experiment, it may be that the subject has one set of concepts that relate to her English vocabulary and one set that relate to her Chinese vocabulary, and that the two sets of concepts do not interact at all (as unrealistic as this prospect might be). In this case I would say the subject’s brain contains two minds. This highlights the other point you make: the connection with the physical. Neither of those minds could be complete and convincing in discussions without some sort of appreciation, whether actual or simulated, of being wired up to a body with hunger, tiredness etc.

    The thing is information and relationships between information. It may just be my quantum mechanics/information theory prejudice as I tend to understand all of Physics this way.

    Finally I agree that it is good to be suspicious of impractical thought experiments. However, like the case of Maxwell’s Demon, if they seem possible *in theory* then it is very good for the theory to understand why they in fact are not possible.

    • stevegrand says:

      I’d go along with most of that. I think Searle was reacting against the prevailing paradigm, and Chomsky’s ideas on language were a significant part of that paradigm. But things have moved on, one hopes.

      Thanks for the input.
