pnathan: elephant bypasses fence to drink from pool (Default)
An amusing anecdote from the Clink development trenches.

Naive list intersection is, roughly, O(n·m), where n and m are the lengths of the input lists; the result can contain at most min(n, m) elements.

This turns out to be hugely important when implementing JOIN in a relational database, because you wind up intersecting n-ways for n tables.

Some empirical analysis of a 3-way intersection:

Intersect from largest to smallest:

CLINK> (time (ref *foo* :row 
   (multi-column-query *foo*
       `(1 ,#'(lambda (s) (find #\1 s))) 
       `(0 ,(lambda (x) (> x 300))) 
       `(3 ,(lambda (x) (> x 900)))))))
Evaluation took:
  4.015 seconds of real time
  4.149000 seconds of total run time (4.149000 user, 0.000000 system)
  103.34% CPU
  8,676,250,063 processor cycles
  1,252,768 bytes consed

And from smallest to largest:

CLINK> (time (ref *foo* :row 
   (multi-column-query *foo*
       `(1 ,#'(lambda (s) (find #\1 s))) 
       `(0 ,(lambda (x) (> x 300))) 
       `(3 ,(lambda (x) (> x 900)))))))

Evaluation took:
  0.766 seconds of real time
  0.879000 seconds of total run time (0.879000 user, 0.000000 system)
  114.75% CPU
  1,655,372,433 processor cycles
  1,074,592 bytes consed

We can clearly see that runtime dropped by roughly 5x, processor cycles by roughly 5x as well, and allocations by about 15%.
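The fix can be sketched in Python (illustrative only; Clink itself is Common Lisp): intersect pairwise starting from the smallest set, so every intermediate result is bounded by the smallest input.

```python
def intersect_all(candidate_sets):
    """Intersect a list of sets, smallest first, with early exit."""
    ordered = sorted(candidate_sets, key=len)   # smallest to largest
    result = ordered[0]
    for s in ordered[1:]:
        result = result & s     # cost tracks the (small) running result
        if not result:          # nothing survives; stop early
            break
    return result

rows = intersect_all([{1, 2, 3, 4, 5}, {2, 3, 4}, {3, 4, 9}])
```

Starting from the largest set instead forces every pairwise step to walk the big set, which is exactly the slowdown in the first timing above.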

One of the most fun things about the Clink project is that it directly uses concepts from undergraduate computer science courses and applies them.

Common Lisp tooling typically isn't oriented around the continuous integration/build systems that we're accustomed to in 2014.

I don't much like that, particularly since CI is one of the few tools that has been demonstrated to work, and work well, in software engineering.

Anyway, I updated my TOML parser to work with Travis CI (but the principles are the same regardless of the CI host). Here's the code, followed by a write-up.

As YAML:

  - curl -O -L
  - tar xjf sbcl-1.2.6-x86-64-linux-binary.tar.bz2
  - pushd sbcl-1.2.6-x86-64-linux/ && sudo sh install.sh && popd
  - curl -O -L
  - sbcl --load quicklisp.lisp --eval '(quicklisp-quickstart:install)' --eval '(quit)'
  - sbcl --script run-sbcl-tests.lisp

Where the run-sbcl-tests.lisp looks as follows:

(require "sb-posix")
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                       (user-homedir-pathname))))
  (when (probe-file quicklisp-init)
    (load quicklisp-init)))
(defparameter *pwd* (concatenate 'string (sb-posix:getcwd) "/"))
(push *pwd* asdf:*central-registry*)
(ql:quickload :pp-toml-tests)
(let ((result-status (pp-toml-tests:run-tests)))
  (sb-posix:exit (if result-status 0 1)))

Under the hood and in Lisp, pp-toml-tests:run-tests drives some FiveAM code to run the extant tests. FiveAM is, as far as I can tell, designed for on-the-fly interactive testing, as most Lisp tooling is. It was surprisingly awkward to hook into continuous integration. I've written my own bad hack of a unit testing framework, the "checker" system, designed for running in CI, but it's, well, a bad hack. I should look elsewhere.

A few key highlights of what's going on in this code:

  1. I use old-style ASDF registry manipulation to dynamically set where we should be expecting our systems under test to be.
  2. This code relies on the SB-POSIX SBCL extension - I expect other implementations can provide the same functionality, but SBCL is what I primarily use.
  3. I hand-load Quicklisp each time. That's not ideal, and should be changed when I update pp-toml to TOML 0.3.
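The exit-status contract that run-sbcl-tests.lisp implements with sb-posix:exit can be sketched generically (Python here, purely for illustration): the CI host marks the build failed on any nonzero exit.

```python
def run_tests(tests):
    """Run each zero-argument test function; report and return overall success."""
    ok = True
    for test in tests:
        try:
            test()
            print(f"PASS {test.__name__}")
        except AssertionError as e:
            ok = False
            print(f"FAIL {test.__name__}: {e}")
    return ok

def test_addition():
    assert 1 + 1 == 2

# CI keys off the process exit status, just as the Lisp script does:
# sys.exit(status) would end the script the way sb-posix:exit does.
status = 0 if run_tests([test_addition]) else 1
```

The whole trick is simply mapping "all tests passed" onto exit status 0; everything else is reporting.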

Hope this helps your Common Lisp integration testing!

My VPS died a week or so ago, and I haven't had time to get back to it until today. I wanted to move to a more modern approach to configuration management, so I cooked up a Docker image with Apache serving the static sites (much like the old VPS configuration did).


In order to do this:

- diddle DNS with network solutions away from VPS to my DigitalOcean box.

- diddle DNS with Gandi ( a classier provider)

- figure out which user can log onto a digitalocean coreOS box.

- upload docker image to dockerhub

- learn that my custom static site generator, Volt, was never in a stable state when I generated the site, which now survives only as a collection of Markdown files.

- figure out systemd enough to get docker running on boot

- fix the site to serve a derpy placeholder page rather than a bare Apache directory index

- bounce Jenkins, only to find that Jenkins is *now* crashing the JVM somehow, dropping my ability to keep my network sane (i.e., my regularly scheduled Ansible run).
I've recently switched positions to one that's more Linuxy.

In this situation, I have, roughly, several options for programming languages to do my job in. My job will entail devops/tools/scripting sorts of things. Things that I need to be able to do include - shelling out, string futzing, regex work, bundling thingies and sending them to remote servers, and *so forth*.

On the plate of possibilities we have - Ruby, Python, Clojure, and Go. Python is the officially preferred language (as well as the common tooling language).

Roughly, the first three languages are similar - dynamic typing, interpreted(ish), unsound type systems, relatively easy regexes, and not terribly fast. Go is different - it is statically typed, compiled, also has an unsound type system (lots of potshots have been taken here), and is actually reasonably fast. Go and Clojure both have a relatively sane multithreading story.

I evaluate languages on several areas:

0. Maturity. This is a bit hard to define, but it roughly translates to the number of times angry engineers have forced changes in the language because mistakes were made (along with the willingness of maintainers to make those changes). A good language has had this happen early on, and very few of its aspects are widely considered to be wholly bad design at this point.

1. Expressivity. Where does it live on the Blub lattice?

2. Speed. Contrary to many popular opinions, speed does matter at times, and you need to be able to break out the optimization guns when the occasion demands.

3. Correctness of written code. How likely is an arbitrary piece of code to have bugs? Something like Agda might rate very low here, and something like hand-written hex might rate very high. Correctness is typically achieved by a combination of tests and type systems. Tests rely on having reasonable "seams" in the code creation process, where test instrumentation can be injected and the module evaluated for correctness.

4. OODA decision rate. How fast can you observe your code's effects and change it? Lisps traditionally rate extremely high on this, with C++ ranking supremely low.

5. Quality of implementation. Separate from maturity, the compiler/interpreter actually defines the language you use - it reifies the language. It should, therefore, be a very solid system.

6. Library breadth and depth. Much like implementation and maturity, libraries that have been around a long time, and have had the bugs ironed out of them provide a better service to the language.

I plan to work through each of the four languages, writing a simple tool to parse logfiles in each, and summing up my experiences as I go.
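As a baseline for what that tool might look like, here's a minimal sketch in Python (the log format here is hypothetical, chosen only to illustrate the shape of the task):

```python
import re
from collections import Counter

# Hypothetical log format for illustration: "LEVEL: message"
LINE_RE = re.compile(r"^(?P<level>[A-Z]+): (?P<msg>.*)$")

def summarize(lines):
    """Count log lines per level, skipping lines that don't parse."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

log = ["INFO: started", "WARN: low disk", "INFO: done", "garbage"]
```

Each language will get its own idiomatic version of the same thing: regex match, tally, report.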
The code for Cusp/lispdev is, roughly, working.

"I've taken the Lispdev code and gotten it sort of more or less working. That is, I can communicate with the REPL within Eclipse. Stack traces sort of work. Highlighting works. And a few other things work. There's a lot more capability in the code that needs to be enabled and winkled out." - from my post on reddit/r/lisp/.

Looking at automating the builds now.


Feb. 16th, 2014 02:00 am
One project that has languished for years is the CUSP Common Lisp plugin for Eclipse. There's a fork, Lispdev, also abandoned.

It's very aggravating, frankly. Lispdev doesn't appear to work - the preferences pane isn't even set up. Booting throws exceptions right and left.

CUSP doesn't really work on install, failing to lock onto SBCL, and things are generally 'eh'.

Lispdev is about 28KLoC and Cusp about 19KLoC, all in Java, of course.

Feh. I want to get this working to the point of releasing a SBCL-working version. Let's see if interest exists for further work past that.

I'm on record denying the idea of software engineering. We don't work with the physical world the way physical engineers do. We don't have the science<->practice chain the way the regular engineers do. Worse, I think the IEEE SWEBOK is horse poop.

But there's still a practice and rigor to software development. I've been in this world for a few years now, in a few different teams. I've drunk from the wisdom of others. I think I can say something not entirely worthless about the matter of writing good software.

The first thing is the goal.

  • What is the intended product?
  • When must it be done by?
  • To what end are we undertaking this effort?

These questions describe the scale of the effort, the kind of people asked to work on the effort, the tools used during the effort, and the process best implemented during the execution. This is a very simple set of questions designed to understand what the problem is and where you want to go. Put simply, these are the strategic points required to put substrategies and tactics into play.

The common ends in business are "makes us money" or "saves us money". The time it must be done by is usually impossible and best described as "yesterday". The actual product is often mutable and is the usual concern of software creators.

The second thing to consider is the famous cost-quality-speed triangle (pick two). Your company mandates the quality: while you, the creator, control it, your company may find you lacking if you mishandle it. The same is partially true of speed. Very few products make or break a company by release date, and software projects are notorious for being late, particularly when estimates from the line people are disregarded. Cost is, again, something you don't really control for software projects: it's labor + overhead for your workspace + support staff.

As the creator of software, you can materially affect speed and quality. Let us presume that you are going to work your usual 35-45 hour work week and have a reasonable competence at the particular technology that you're dealing with - same as everyone else. How do you manage - from your level as an individual contributor and perhaps mentor to others - keeping things working in alignment with your company? That is the next blog post.

The third thing to consider is politics, or, more euphemistically, "Social questions". An old rule of thumb is that software architecture reflects the organization it was developed in. Another rule of thumb is that people are promoted to their level of incompetence. Yet another is that most organizations stratify vertically and attempt to build internal empires. Let us not assume that our fine institution is not subject to these pressures. It probably has already succumbed in part.

Several implications result from this.

  • Only you and the others tasked with your project are actually incentivized to complete it. Others may be incentivized to support your organization. Result: when you have to work with other groups, ensure that they have an axe to grind, a wagon to pull, some interest in helping you get your job done. You need them to help, but they can either help you right now or maybe later. I can't count how many emails I've written that have been dropped on the floor.
  • The software architecture is not, per se, the best one for the technical task. It does, however, represent a satisficing social architecture: a division of tasks and people that allows the workers to operate effectively.
  • Your software probably duplicates someone else's, and they won't want to merge. Your silo has subtly different constraints and needs than other silos. Often, the constraint is as simple as "Bob reports to me and fixes bugs the day I ask, but you, Alice, don't report to me and may find my bug to be rather PEBCAK and not fix it". While this is more than slightly silly, it really does have implications. There's no use having centralized software if only some users are served. The others will decentralize themselves and get their jobs done.
  • Incentives usually produce results designed to ensure more incentives. E.g., if you are not rewarded for fixing problems but instead are rewarded for moving fast, then you won't fix bugs, you'll move fast.

None of these things have to do with software engineering; they hold pretty true pretty well across any producing endeavor. But they lay the context and foundation for the next blog post.

Looking things over, I am forced to think about monetization of Critters (should it be Critterz?).

It seems clear that IAP enables 'whale' behavior, which substantially increases total & average revenue per user. Since I am someone who likes making money (and likes having a way to keep getting the customer's money), it makes sense to figure out how to 'play' IAP.

By the way, I'm mentioning numbers here, but these numbers are, flatly, provisional and are not final.

Key idea: You get your game and the non-cosmetic content by giving an up-front fee. Cosmetic & 'hard-core' play costs money. Things that increase database burden cost money to cover it. No handwaving.

Set up account and play the game, free, for 24 hours.

Purchase the game for a price point (between 2.99 and 4.99).

This gets you access to N critters. You can play the critters as much as you like, as long as you like, until the service shuts down. Since the game is running online, you will get updates as part of your purchase.

Certain add-ons will cost money. For instance,

- a default emotion set will be available for dogs, cats, and foxes (I have to figure out the picture rights for dogs & foxes, since I don't have either). If you want to upload your own pictures, that will be a charge - say, $0.99 for a picture set.

- If you want to write a history of your critter, it will cost some $X (not too expensive) per critter. Maybe more if you really want to write a lot. This is directly targeted at role-players and people who want to record their virtual pet's story.

- Another idea might be swag. Say, you can feed your cat - Amos - kibble every night. He is a happy cat. However, you can buy 'Tuna' from the swag menu for $0.25[1]. Amos adores tuna, and adores you for giving him tuna (a boost in stats, and he behaves better). You feel good, and I get a wee bit of money.

Fundamentally, I am someone who played games as a teen and young adult - you bought them, and that's all. No continual mooching. I played WoW. It seemed reasonable to pay a monthly cut. This worked out, as I knew that they were keeping servers alive and improving the game. I don't want to play a game where I have to 'insert a quarter to keep playing'. Holding your experience hostage to money seems... off. It's not above-board. It's like if a hotel informs you that in order to turn the lights off to go to sleep, you have to pay extra. And then to pay to turn the lights back on. Yech.

Seems much more fair and honest to charge up-front for a fair and reasonable service, with any premium services available and marked as such.

[1] This might actually not be workable due to payment processors wanting a cut. If they want a $0.25 min transaction, it'll have to be more.
I've "relaunched" faegernis.

It used to be my personal site, but you know how hard it is to spell out faegernis to people? Too hard. Anyway -

Perhaps about 18 months ago I had the idea of "code as art". In particular, is it possible to consider code as art without reference to extant art forms (poetry, visual design)? Some hold that obfuscated code (the IOCCC) is an art, but I'm not looking for crafty work, I'm looking for Art.

Another way to think about it is - what makes Quicksort and the Floyd-Warshall algorithm so beautiful?

Or, what if Quicksort and the Floyd-Warshall algorithm represent a minimalist aesthetic best suited to Modernist conceptions (e.g., Apple hardware design taste), and other equally viable aesthetics for code exist, such as the baroque & rococo?

I want to explore these ideas with faegernis. I don't know where I'm going to land or how I'm going to get there, but I think it's something that needs doing.

I've taken an at-home coding vacation this week and last week. I've been doing reading on model railroads and sailing, as well as a smattering of other books.

I flipped through the list of papers from POPL 2014; it's kind of frustrating to me - they all appear to be focused on type systems. I'm not sure why this is such a thing in programming language design (perhaps the math is enjoyable to work on!). But I don't think the big problem - the software crisis, if you will - is in data types. Data types in most of the developer's world sit in the grungy province of the C family (C, C++, Java, C#, ObjC) or in the more fluid province of the Perl family (Perl, Ruby, Python, Groovy, etc.). Neither of these families has the type system problem solved even at the level of the ML languages (which are, AFAICT, sort of old hat today). So from a "cool insights useful in fifteen years" level, I'm depressed! These results probably won't ever reach an Ordinary Practitioner.

For me, the bridge into the Best Possible World is founded in Software Contracts, most commonly associated with Eiffel and occasionally supported by obscure extensions for other languages. Suppose I claim not only that some variable quux is an int (solvable in C), but that it is between 0 and 10 (solvable in Ada, somewhat in SBCL, and perhaps in Agda & the family of dependently typed languages), and further that quux will always be passed into frobbing_quux and used to generate a result (similarly quantified). Let me call that a "Code Type System". It may be that complete analysis of the code demands a complete type system for the data. Certainly some of this work has already been done in the model checking & static analysis community, which can already detect certain concurrency bugs and malloc/free errors. Then, of course, we find ourselves building a model of the program in our "Code Type System" before we actually build the program, and if we need to alter the system, we have to rebuild the model framework. This is well understood to be the classic failure mode of fully verified software development.

Let me instead take this thought experiment by way of an example, using Common Lisp.

(def-checked-fun mondo (quux baz)
  "Some docs"
  (check (and (calls-into frobbing_quux quux)
              (type-of quux integer)
              (range quux 0 10)))
  ;; do stuff
  (frobbing_quux quux)
  ;; do stuff
  )
The theoretical def-checked-fun macro will examine the code, both as-is and macro-expanded, verifying that it does indeed appear that quux satisfies the required judgement. Of course, we can't answer that quux is of a particular type or that it falls into a certain range at this point: in order to make that judgement, either run-time checks need to be added (to constrain possibilities), or intraprocedural analysis needs to be performed. However, the calls-into check can be either demonstrated or an "unproven" result returned. This is simple - some CAR needs to be frobbing_quux with quux in the argument list.
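The calls-into check itself is just a tree walk. Here's a sketch, modelling s-expressions as nested Python lists of symbol strings purely for illustration (the names frobbing_quux and quux come from the example above):

```python
def calls_into(form, fn_name, var_name):
    """True if the code tree contains a call (fn_name ... var_name ...).

    `form` is an s-expression modelled as nested lists of symbols (strings).
    """
    if not isinstance(form, list) or not form:
        return False
    # Is this form's CAR the target function, with the variable among its args?
    if form[0] == fn_name and var_name in form[1:]:
        return True
    # Otherwise recurse into every subform.
    return any(calls_into(sub, fn_name, var_name) for sub in form)

body = ["progn", ["setq", "y", "1"], ["frobbing_quux", "quux"]]
```

A real checker would run this over the macroexpanded body, and report "unproven" rather than "false" when the walk finds nothing.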

Some of this, in some measure, is already done in SBCL. I have long thought (at least 8 months now) that a compile-time macro system (def-checked-fun is an example) could provide some very interesting insight, particularly if you begin to share information.

The worst possible result is that the checker returns something to the effect of "No possible errors detected; no data found to make judgement". In short, it's useless. The best possible result is that intraprocedural contracts can be built based on not simply types but expected semantics in a succinct system (only modelling what is useful), then when time comes for running, all this information is quietly removed.

I propose that this is a useful idea, and practical, too - for an Ordinary Practitioner. It's like lightweight unit tests that require almost no lines of code and almost no tweaking.

It's important to understand, of course, that I'm thinking about "correct" results, not "complete" results - two quite different things! Dawson Engler's talk/paper "A Few Billion Lines of Code Later" really strikes at the heart of what makes a system useful for this kind of work in practice and the exigencies of real world work. I won't summarize it here.

What I don't know is whether Coq or other systems (ACL2, etc.) already implement this sort of fluid procedural static type checking. Possibly they do.

I've been doing Objective C development at work and it spilled out this last week (oh noes) in a Valentine's Day program at home for my wife - sort of an e-greeting card program. After I finished it, I realized that I could actually generic-ize it and sell customized versions to people.

I've written the app and got a landing page developed. I'm thinking that I'll go two ways:

- Gumroad for a non-customizable Valentines Day app.

- PayPal for a customizable app. This will involve a custom build process for each request, and I'm not ready to automate this process yet via some sort of Ruby on Rails job, since I don't have any revenues. It'll be faster and simpler just to do a custom build and .app ship for each user for a while, I think.
I'm a *huge* fan of Mozilla Rust the programming language. The short reason is that it's a powerfully typed language with optional garbage collection & ability to call into unsafe code.

I have a playground of data structures (flaky data structures, get the pun? ha ha). I've roughly kept it maintained for about a year now & updated some of it recently to Rust on master.

Wow. So change compared to Rust 0.6.

* No more @ pointer. Now it's rc::Rc, .borrow(), and .clone(). Really tedious.

* Total confusion on my part about how to build traits for things that wind up being rc::Rc'd. Still no idea. I'll need to sort this out with #rust at some point.

* match(ref foo, ref bar, ref baz) is new. Argh!

Other than that, there are a few oddities but nothing catastrophically weird. Although it was vaguely amusing to accidentally write myself an infinite loop, I was able to get the linked list and circular buffers compiling.

Next time I'm looking for low-stress coding & debugging, I'll fix up the binary tree and start work on a 1-dimensional range tree (Data structure #1 in Samet's multi-dimensional data structures book).
The Lisp REPL is a particularly awesome tool, particularly when paired with SLIME or other customized evaluation system for live programming.

This insight has led to R, IPython, Macsyma, MySQL, Postgres, and other systems having their own REPLs.

However, a serious problem with the Common Lisp REPL is the inability to sling large amounts of data around easily, perform queries, etc. The system simply isn't built to hold millions of rows of data, run queries over them, and feed the results into particular functions. Lists are too slow; vectors are too primitive; hash tables are too restrictive. Further, queries start looking really hairy as lambdas, reduces, and mapcars chain together - SQL has shown a clearly superior succinctness of syntax. Worse, these queries are ridiculously non-optimized out of the gate. I've had to deal with this situation in multiple industry positions, and it is *not* acceptable for getting work done. It is too slow, too incoherent, and too inelegant.

Hence, I am working on a solution; it started out as CL-LINQ, or, Common Lisp Language INtegrated Queries, a derivative of the C# approach. The initial cut can be found at my github for interested parties. It suffers from a basic design flaw: 100% in-memory storage and using lists for internal representation.

I am proud to note that I've been able to begin work on an entirely improved and redesigned system. This system is derived from several key pieces. The first and most important is the data storage system, which is what I've been working on recently.

Data is stored in data frames; each data frame knows about its headers. The data itself lives in a 2D Common Lisp array, ensuring nearly-constant access time to a known cell. Data frames are loaded by pages; each page contains a reference to its data frame, a reference to the backing store, and information about the data in the frame. Each page has a 1:1 mapping to a data frame. Pages are routed through a caching layer with a configurable caching strategy, so only data of interest need be in memory at a given point in time. Finally, a table contains a number of pages, along with methods to access the headers, particular rows in the table, etc.
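To make the layering concrete, here's a toy sketch in Python of the page-through-cache idea (class and method names are hypothetical, not the actual system's API), using an LRU strategy as the configurable policy:

```python
from collections import OrderedDict

class Page:
    """Owns one data frame (here just a list of rows) plus its store key."""
    def __init__(self, key, rows):
        self.key = key
        self.rows = rows

class PageCache:
    """LRU cache routing page loads; only pages of interest stay in memory."""
    def __init__(self, loader, capacity):
        self.loader = loader        # fetches a Page from the backing store
        self.capacity = capacity
        self.pages = OrderedDict()  # key -> Page, in LRU order

    def get(self, key):
        if key in self.pages:
            self.pages.move_to_end(key)         # mark most recently used
        else:
            self.pages[key] = self.loader(key)  # fault the page in
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
        return self.pages[key]

cache = PageCache(lambda k: Page(k, [[k, k * 2]]), capacity=2)
```

A table would then iterate over page keys and call `get`, never touching the backing store directly.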

After this system is done (perhaps 80% of the way done now), then the index system can be built. By building the indexes separate from the raw storage system, I can tune both for optimal behavior - indexes can be built as a tree over the data, while the data can be stored in an efficiently accessible mechanism.

Finally, as the motivating factor, the query engine will be designed with both prior systems in mind. The query engine's complexity will be interacting with the index system, to ensure high speed JOINs. A carefully developed query macro system could actually precompile desired queries for optimal layout and speed, for instance.

Features that will be considered for this project include - integration with postgres as the storage engine - compiled optimization of queries - pluggable conversion system for arbitrary objects and their analysis.

At the completion of this project, a library will be available for loading large amounts of data into data tables, computing queries and processing upon them, and then storing the transformed data into external sources.
Looking though Hacker News, I see that Computer Modern was ported or something to the web. Perhaps half of the comments are rants about how Computer Modern doesn't fit current design style and fad.

I don't really get the hate - I've always viewed CM as a strikingly elegant and timeless font. In part, I suppose that's my perspective - I have several math books written in the 60s/70s, pre-TeX, where the book is literally typewritten, as in typewriter and monospace. Terrifically ugly, without a doubt. Further, after years of looking at fonts (& being a small bit of a font nerd when younger), I favor serifs. They look better, I think: easier on the eyes for long-form reading.
One of the problems I've idly wanted to solve for a long time is the problem of automatic scheduling suggestion: you have a variety of things you want to do, but you don't know how to fit them all in.

Or, you have a variety of people wanting to find a meeting time, and have to poke around trying to find one that works. Blech!

I came up with a toy solver for that tonight using Prolog - it's over on a gist on my github account.

This particular solution aggressively leverages Prolog's unification facilities. First, I assert that a set of slots are available, then I have a query that determines available times.

The available_times/3 predicate takes three parameters - the first (the people being matched for) is a list pattern-matched into P1/PRest, followed by the timeslots. The head represents a person: we unify their availability, summoning the set of matching timeslots. Then we move down the list of people recursively with the Rest list, terminating when the Rest list is nil. The key idea is that the Day & Time slots are being unified and the clauses in the query are *also* being unified; therefore any solution for an individual person in the recursion chain is intersected with the solutions for the other people in the chain. If an available timeslot exists, it will be returned.

all_availability/2 is a mechanism to take a personlist and return a list of 3-tuples assigned to bag, using the findall/3 predicate.
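The gist itself is Prolog, but the core move - intersecting each person's availability down the recursion - can be sketched in Python with hypothetical data:

```python
# Each person's availability is a set of (day, time) slots.
availability = {
    "alice": {("mon", 9), ("mon", 10), ("tue", 9)},
    "bob":   {("mon", 10), ("tue", 9), ("tue", 11)},
    "carol": {("mon", 10), ("tue", 10)},
}

def available_times(people):
    """Slots on which every named person is free - the recursive
    unification in available_times/3 amounts to this intersection."""
    slots = availability[people[0]]
    for person in people[1:]:
        slots = slots & availability[person]
    return slots

common = available_times(["alice", "bob", "carol"])
```

What Prolog buys you over this is that the "intersection" falls out of unification for free, and the same predicate runs in multiple modes.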

It would be nifty to tweak this so that it was boosted to the point where, e.g., a list of classes w/ timeslots could be fed to it, then a set of possible class schedules spat out. Add on a prioritization scheme (must/may), a decent UI (Prolog's native CLI is awful), and this would be very useful.
In Critters, it's very important to ensure that things work - no one wants their game to mysteriously break.

This is a technical discussion of the current plan - if you're not interested, you can skip this!

There are several basic parts of the Critters system: the database containing the information about the pets & accounts, the 'smart system' which updates the database regularly, the web server, and the web client. Right now, the database is pretty much done; the 'smart system' sort of works (mostly, I hope!), and the web server is "mostly" done. The client is about half baked though, and needs more work.

One key idea of engineering is the idea of "integration testing", where tests are run from start to finish on the product, verifying that components work together. In order to have a working client, we need to have a working server. I don't know that my client works unless the server works reliably. What I'm doing to test the server side of the code is building a "test client". This test client will be a library to talk to the server. Tests can be written with it (or, if I want, play the game on the command line).

The test client is under development right now. I'm writing it in Haskell, a language designed to be very exacting and to catch errors. My plan is to release the source code of the test client publicly (probably under an AGPL license). This way others can use it as example code (or perhaps as the basis of their own Critters client).

Anyway, back to the code!
My major upcoming (side) project is Critters, a game I'm writing. It's designed for mobile phones - specifically, the Firefox OS phone. Since Firefox OS apps are HTML5+JS front-ends, this means the game will work in any relatively modern browser. The basic idea is a virtual pet game, but I plan to make it quite a lot more; in particular, I plan to build in certain AI features over time. Why is that? Well, I noticed that a lot of mobile phone games are kind of uninteresting - not much intellectual enjoyment.

So Critters is an exploration into what that will take. How much fun can you pack into a virtual pet game, anyway?

Well, the initial fun is going to be driven by Ridiculously Photogenic Pets. Primarily, my cat Amos. You'll be able to play with a virtual Amos at first. For someone who grew up playing Warcraft 2, Unreal Tournament, etc., this is pretty lame. So...

After the initial version is viable and spinning along, learned behavior will be the next key upgrade I'll make: how you interact with your pet will matter over time. Poke your cat? Eventually he'll be upset at you and not purr (& do other things)! The overall learned behavior ideal is to have behavior I never dreamt up showing up in your pet's actions. Pretty cool IMO.

Prosaically, the business model is going to be up-front payment + in-app purchase for add-ons for your pet(s). Fundamentally, I want to ensure that my users' interests in giving me money are aligned with my giving them a better game experience. Ads make for such a poor experience!

On the sheer geek front, my plan is to publish the API and have an official specification. The official spec will be encoded as a Haskell CLI program. This way iOS, Android, and other applications can be created if someone really wants to make them.

Bad DevOps practice, along with basic software engineering ignorance, led to almost half a billion dollars directly lost.

Remember to ensure that you can roll forward and back from any point in your deployment system. Configuration must be controlled just as much as pure software.


Oct. 21st, 2013 07:50 pm
Looking at Objective-C today. I don't think it's a bad language - a bit weird. The combination of run-time dispatch + compilation is a strange one. I gather that it was originally a C preprocessor. This shows in odd ways, especially with the reference counting system. The function declaration/call syntax is, IMO, a bit of a disaster: square brackets, colons, etc. I keep reading the square brackets as Lispy brackets, where the function comes first, rather than Smalltalk brackets, where the object comes first.

Semantically, I fear that dereferencing a nil pointer will be a problem; run-time type errors are something I expect will loom large in my future.
Ever since I started learning to write software as a teen, at a certain point in the night, my mind would detach from the hustle and bustle of the day and I'd be ready to think of new projects. I'd start a project, work on it for an hour, then sleep would overtake me and I would have to rest. Often these projects haven't gotten anywhere. But there's something very exciting still, almost 20 years after learning to write QBASIC, about writing code and creating something out of nothing.
Page generated Oct. 23rd, 2017 12:21 am
Powered by Dreamwidth Studios