It’s a Question of Trust
“It’s a question of trust. It’s a question of not letting what we’ve built up crumble to dust.” – Depeche Mode, A Question of Lust
The answer to my groups dilemma struck me as soon as I began implementing the administrative functionality. We really have two fundamentally different groupings at play: people and things. I was trying to represent both with a single entity, but they need to be modeled separately.
A Trust represents a collection of people who place ultimate faith in a subset of administrators within it. Those admins get special powers, such as maintaining the user listings. They can’t delete a user from the system completely (only a system administrator can do that), but they can remove someone from the trust, which revokes any access that person had gained through its groups.
A Group can then be created within a trust, or can stand alone. When a group is part of a trust, the trust’s administrators are automatically given access to it, including administrators added later. Alternatively, groups can be privately shared between two users. Without a trust to control their access, these groups are static and their membership cannot be altered. A secret can be shared between multiple groups, or even between multiple trusts; any user with access can see the full list of those groups, but cannot alter that membership without the appropriate administrative access. A rough sketch of these entities is below.
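To make the relationships concrete, here’s how these entities might look as Django-style models. The names and fields are purely illustrative; this is not PassBrain’s actual schema.

from django.db import models
from django.contrib.auth.models import User

class Trust(models.Model):
    name = models.CharField(max_length=100)
    members = models.ManyToManyField(User, through='TrustMembership')

class TrustMembership(models.Model):
    trust = models.ForeignKey(Trust, on_delete=models.CASCADE)
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    is_admin = models.BooleanField(default=False)  # trust admins maintain the user listing

class Group(models.Model):
    name = models.CharField(max_length=100)
    # A null trust means a standalone or privately shared group with static membership.
    trust = models.ForeignKey(Trust, null=True, blank=True, on_delete=models.CASCADE)
    members = models.ManyToManyField(User, related_name='password_groups')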
Again we go back to how much easier this would be with a monolithic vault system, but it’s an inherently flawed design. It might solve a lot of problems with access, but it encourages duplication, and that’s especially harmful when you’re dealing with passwords. Passwords should change regularly, and when a password is stored in two places, one copy will soon be out of date, which is worse than useless.
Location, Location, Location
So what’s more secure? A complex password in a desk drawer, or a simple one memorized and reused across multiple sites? For most people, probably the former. Chances are, you don’t have anything the NSA or the CIA would care about, so you’re more likely to be at risk from a teenager somewhere in Asia harvesting data for identity thieves than you are to have a spy breaking into your office. Of course, this isn’t to say that writing down a password is a good idea, so what’s the better way?
I have a KeePass file that I’ve used to store personal passwords for about a decade now. It’s got a good secure password that’s easy for me to remember, but it’s a pain to keep backed up and in sync. You can use something like Dropbox to make it available, but since Dropbox isn’t really designed to be particularly secure, you’ve given an attacker an additional access vector. It’s also difficult to share any single item with someone else without giving them the whole thing.
You could use a site like LastPass or 1Password, but not only are these more tempting than a shiny red button to anyone hoping to harvest passwords for nefarious purposes (LastPass has already been compromised at least once), but they could turn off their servers at any point and your saved data would all be gone forever.
The best solution, in my opinion, is to distribute the data and ensure no single point of access gives an attacker everything they need to break in. In PassBrain, I’m accomplishing this by providing the software to anyone who wants to install it on their own servers (so an attacker can’t just break into one site like LastPass and access everyone’s data; they’d have to know to target your password site specifically), and then additionally splitting the keys between client and server.
First, we have the user’s login password. This is of course stored nowhere except as a hash, which is fine since we only ever need to check that it matches, but we can’t do that with our secret data, which must be read back. So the next level is a public/private key pair that’s unique to a single device for a user (if a user accesses the site from multiple devices, they’ll have a unique key pair for each). The public key is stored on the server in plain text, since other users of the site need it to encrypt data to send to them. The private key is then stored locally on the user’s device.
If an attacker were to gain access to the server, they would have no way of extracting the data from the database, since they would need the private key from the user’s device. Likewise, if they have access to the device but not the user’s password to log into the site, the private key gives them nothing useful to decrypt. An attacker could install a keylogger to get this information, but at that point they could just log the passwords directly, no matter how secure the system is.
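As a concrete illustration, here’s roughly what the per-device key split looks like using PyNaCl. This is just a sketch of the scheme, not PassBrain’s actual code, and the library choice is my own assumption.

import base64
from nacl.public import PrivateKey

# Generated once per device, the first time a user logs in from it.
device_private_key = PrivateKey.generate()
device_public_key = device_private_key.public_key

# The public key goes to the server in plain text, so teammates can
# encrypt secrets destined for this device.
public_for_server = base64.b64encode(bytes(device_public_key)).decode()

# The private key stays in local storage and never crosses the network.
private_for_local_storage = base64.b64encode(bytes(device_private_key)).decode()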
I also considered a model where a centralized hosted site like LastPass controls user access but connects to a secure remote location, controlled by the client, to store the actual password ciphers. I don’t think this gains anything over simply hosting the site yourself. It could add an additional layer of security that’s worth looking into, but the more moving pieces you add to a system like this, the more potential doors you open up for someone to get in through.
Inclusive vs. Exclusive vs. Monolithic Grouping
Let’s start by looking at some of the shortcomings of the vault system. One of the first things I did when I set up my vaults was create one to hold credentials that had just been handed over by another vendor who was transferring a client to us. We had this great dump of credentials given to us in an Excel spreadsheet (I know, right?), so before tossing it into a burning dumpster I created a single new vault for the client and saved them all there. This worked great, until I needed to share some information about a dev environment with one of our programmers.
I couldn’t give them access to the entire vault without giving them production credentials they shouldn’t have, so I had to delete the dev credentials and set up a second vault (so I now have “Client Prod” and “Client Dev”). Later, I needed to share a site login from that dev vault with QA, but they shouldn’t really have server info. Since I didn’t want to duplicate the credentials in a QA vault, I had to choose between giving them full Dev credential access or splitting things again into three separate vaults. Most of these problems are solvable by setting up everyone with their own accounts, but sometimes that’s either impossible or impractical, and best practices of that sort are a subject for another day, so let’s address the issue at hand.
The approach I’ve been taking so far is to use a grouping system instead of a monolithic vault system. I’d create a Dev group, a QA group, and a Prod group. That allows us to tag a secret as something like “QA, Dev,” giving access to the appropriate groups. This is a much better system, since I can reuse the same secrets between groups and ensure that any updates are reflected everywhere, rather than ending up with stale data from duplication. But what about the client? If I create a Client group and add everyone to it, I end up with too much overlap (a QA person added to that client group gains access to the Dev and Prod secrets in the Client group as well).
Exclusive grouping solves that problem: a user must be a member of every linked group, instead of just needing to be in any one of them. So if a secret is grouped as Client and Prod, someone with Client access but not Prod access won’t see it. This isn’t a great solution either, because it means you need to add everyone to a ton of different groups. In particular, when you add a new group, you have to go back and make sure everyone who needs it is included.
Of course you could get into hybrid solutions, like groups themselves being flagged as inclusive or exclusive (so adding QA as an inclusive group adds everyone in QA, while adding Prod as an exclusive group locks out anyone who isn’t in Prod), but then you end up with a system far too complex to be useful. You could also have an inclusive master group with exclusive subgroups (so Client might be a master group, giving anyone access, but Prod could be a subgroup that locks down only certain items), or vice versa.
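For clarity, here’s the difference between the two basic checks in code (a sketch of the rules described above, not actual PassBrain code):

def can_access_inclusive(user_groups, secret_groups):
    # Inclusive: membership in ANY linked group grants access.
    return bool(user_groups & secret_groups)

def can_access_exclusive(user_groups, secret_groups):
    # Exclusive: membership in EVERY linked group is required.
    return secret_groups <= user_groups

user = {"Client", "QA"}
secret = {"Client", "Prod"}
print(can_access_inclusive(user, secret))  # True:  "Client" matches
print(can_access_exclusive(user, secret))  # False: "Prod" is missing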
All things considered, the best solution for now is the one that gets the job done with the least complexity and the most clarity. Looking over the options, I think inclusive grouping is the way to go, but I understand now why someone might choose the monolithic vault structure for simplicity’s sake.
Building a Secure Team Password Store
Now, several years later, I’ve found I wasn’t the only one thinking about this problem. A coworker introduced me to 1Password for Teams, and I was very pleased to find somebody had already done all the hard work for me. We’ve been using it for a few months now, and while I’ve had some difficulties with its interface and vault structure, it’s been a very useful tool. So useful, in fact, that my team has become completely dependent on it and has moved almost all of our server information and confidential data into it. This is great from a convenience standpoint, but as with any proprietary system, it’s a black box that we’re locked into. Not only do I have to take their word for it that it’s secure, but I have no reasonable way to get my data back out or transfer to another vendor. I’m at their whim, and when they start charging a fee for the service (which they’ve announced they will, though not how much), my data will essentially be held for ransom.
So what about an open source alternative? I’ve looked around and couldn’t find much that fits the bill, so I’ve started my own: PassBrain. Going back to my conversation four or five years ago, the system is built on the core concept that even someone with full access to the data should be unable to reasonably decrypt it. This means that all secrets are encrypted using a private key that never leaves the user’s device.
This works fairly elegantly: any time a user updates a stored secret (or creates a new one), the data is encrypted locally on their device using their teammates’ public keys (the plaintext value is never sent over the network). Their teammates then download the ciphertext and decrypt it locally.
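Here’s a sketch of that flow using PyNaCl sealed boxes; again, this illustrates the scheme rather than showing PassBrain’s actual code, and the key objects are stand-ins.

from nacl.public import PrivateKey, SealedBox

# In reality these public keys are fetched from the server; stand-ins here.
teammate_public_keys = [PrivateKey.generate().public_key for _ in range(3)]

secret = b"prod-db-password"

# One ciphertext per teammate, produced locally before anything is uploaded.
ciphertexts = [SealedBox(pk).encrypt(secret) for pk in teammate_public_keys]

# On a teammate's device, their private key opens their copy:
#   SealedBox(their_private_key).decrypt(ciphertext)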
It’s been a fun project so far, and is already at the point where I’d trust it with secure data, but as with all things, the details of user and group administration will probably take some time. More details to follow!
Asynchronous Turn-Based MMO
So that got me thinking, why wait? If you’re directly interacting with another player, such as in combat, you want the turn-based gameplay to be seamless. But if you’re alone in the woods, or only fighting AI opponents, why does it matter? I’ve talked before about cell-based map structures, and I got to thinking that something like that would be perfect for an asymmetric or asynchronous MMO.
Basically, the idea is that each individual game cell is a separate timeline. If nothing is going on in a cell, it doesn’t need to age at all (although background processes could keep things moving if AI actors are present). If only one player is in a cell, it can age exactly at their pace, so the experience is that of a single-player game. If more than one player is active, it switches to multiplayer rules, probably a round-robin experience with a time limit on each turn.
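As a toy sketch of how a per-cell timeline might look (all names here are invented for illustration):

class Cell:
    def __init__(self):
        self.players = []  # players currently in this cell
        self.turn = 0      # this cell's private clock, in turns

    def advance(self, actions):
        # An empty cell simply never ages; its timeline stands still.
        if not self.players:
            return
        # With one player, this is called at that player's own pace;
        # with several, actions arrive in round-robin order under a time limit.
        for player, action in actions:
            self.resolve(player, action)
        self.turn += 1

    def resolve(self, player, action):
        pass  # combat, movement, and other game rules go here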
HOWTO: Set up a Persistent Django Environment in Ubuntu on Amazon’s Elastic Compute Cloud (EC2)
# find an appropriate AMI
# I very much like the offerings at http://alestic.com/
# add it to a security group
# open port 80 on that group
# set up server
apt-get update
apt-get install vim
apt-get install subversion
apt-get install apache2 libapache2-mod-wsgi
apt-get install python-setuptools
apt-get install python-mysqldb
# the following are optional, but will be needed if you want to package
# an AMI to reuse this image.
apt-get install ec2-ami-tools ec2-api-tools
# you may need to set JAVA_HOME
# mysql
apt-get install mysql-server
mysql -u root -p -e "CREATE DATABASE mysite_django;"
# install django
cd /opt
svn co http://code.djangoproject.com/svn/django/trunk/ django-trunk
ln -s /opt/django-trunk/django /usr/lib/python2.6/dist-packages/django
ln -s /opt/django-trunk/django/bin/django-admin.py /usr/local/bin
# I use /srv instead of /home because home directories are for users,
# whereas django code is theoretically shared and code being served
# belongs in /srv (or /opt, but that’s more for content that’s not being
# specifically served up, like the django source earlier)
mkdir /srv/django
cd /srv/django
django-admin.py startproject mysite
# set up wsgi
# see http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/
# edit /etc/apache2/sites-enabled/000-default
Alias /robots.txt /srv/django/mysite/static/robots.txt
Alias /favicon.ico /srv/django/mysite/static/favicon.ico
Alias /media/ /srv/django/mysite/static/media/
Alias /admin/media /srv/django/mysite/static/admin-media
Alias /styles/ /srv/django/mysite/static/styles/
<Directory /srv/django/mysite/static>
Order deny,allow
Allow from all
</Directory>
WSGIScriptAlias / /srv/django/mysite/apache/django.wsgi
<Directory /srv/django/mysite/apache>
Order deny,allow
Allow from all
</Directory>
# end edit
mkdir /srv/django/mysite/apache
mkdir -p /srv/django/mysite/static/media
mkdir /srv/django/mysite/static/styles
ln -s /usr/lib/python2.6/dist-packages/django/contrib/admin/media /srv/django/mysite/static/admin-media
# edit /srv/django/mysite/apache/django.wsgi
import os
import sys
sys.path.append('/srv/django/mysite')
sys.path.append('/srv/django')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
# end edit
# test it out
apache2ctl restart
curl http://localhost/
# should output HTML including "<h1>It worked!</h1>"
# set up the db
# edit /srv/django/mysite/settings.py
DATABASE_ENGINE = 'mysql'
DATABASE_NAME = 'mysite_django'
DATABASE_USER = 'root'
DATABASE_PASSWORD = 'password'
# end edit
# edit /srv/django/mysite/urls.py and uncomment the admin lines
# edit /srv/django/mysite/settings.py and add “django.contrib.admin” to your INSTALLED_APPS setting
# and set ADMIN_MEDIA_PREFIX = '/admin/media/'
cd /srv/django/mysite
python manage.py syncdb
# check out http://<your-server>/admin/
# log in!
# create volume
# attach volume /dev/sdh
mkfs.ext3 /dev/sdh
echo "/dev/sdh /vol ext3 noatime 0 0" >> /etc/fstab
mkdir /vol
mount /vol
df --si
# should show up!
# move the database
service mysql stop
mkdir -p /vol/mysql
mv /var/lib/mysql/mysite_django /vol/mysql
ln -s /vol/mysql/mysite_django /var/lib/mysql/mysite_django
service mysql start
Solutions for Running Django in Cloud Computing Environments
“Since App Engine does not support Django models … The authentication and admin middleware and apps should be disabled … Sessions also depend on Django models and must be disabled as well.”
Obviously this is a big problem, as disabling models, authentication, admin, and sessions pretty much neuters Django to the point of uselessness.
This afternoon, however, I came across a blog post by Thomas Brox Røst that explains how to adapt your Django models to use Google’s data framework internally, which should hopefully make everything from the DAL up transparent. My only worry is that this is so simple that its very omission from Google’s own how-to seems to imply there’s something wrong with it in practice.
Alternatively, Mr. Røst has also written up a nice tutorial on enabling persistent storage in an Amazon EC2 instance by placing a PostgreSQL database in an EBS virtual drive. The great thing about this is that it’s a real live database, so the only programming changes required might be in my stored reporting queries, which are written for a MySQL database instead of PostgreSQL. Compared to a dedicated hosting environment, it’s apparently even cost-competitive.
Either way, I love the idea of using Django as a platform-independent development framework that can not only be ported between operating systems, but into basically any environment that can run Python and support some sort of data store. When you look back at what Sun was going for ten years ago, Python and Django seem to have accomplished more without even trying.
Distributed Ticket Sales System
Next time, I’m going to have to get out the big guns: Amazon’s Elastic Compute Cloud and Simple Queue Service. The idea is that, before a user is allowed into our ticket system, they’ll have to sit in a queue and wait their turn.
The queue will be a simple script running on EC2 that stacks everybody up in SQS. When a user is ready to be passed on, they’ll be marked as such in SQS and then handed over to the ticket system. The ticket system then simply checks SQS to verify that they’re approved for entry, and they can proceed with the purchase.
The queue can be updated by the ticket system whenever an order is completed or times out (allowing N simultaneous purchasers), by a simple timer (admitting one additional purchaser every N seconds), or by more sophisticated means (admitting users whenever CPU and memory load drops below a certain threshold).
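Sketching the waiting-room half of this with the boto3 library (my choice for illustration; the queue name and the mark_approved hook are hypothetical):

import boto3

sqs = boto3.client('sqs')
queue_url = sqs.create_queue(QueueName='ticket-waiting-room')['QueueUrl']

def enqueue_visitor(visitor_id):
    # A new arrival goes to the back of the line.
    sqs.send_message(QueueUrl=queue_url, MessageBody=visitor_id)

def admit_next_visitor():
    # Called when the ticket system signals capacity for one more buyer.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get('Messages', []):
        mark_approved(msg['Body'])  # hypothetical: flag the user for entry
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])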
There’s a pretty good primer on using Python and EC2 here.
Cleverbot actually takes a simpler approach than this, and that’s what really piqued my interest. Instead of making a real attempt at behaving intelligently, it simply learns to mimic. This makes quite a lot of sense, as that’s how humans learn anyway (watch a toddler around adults some time).
The bot, unfortunately, is crap. This is most likely because people chatting with it know they’re chatting with a bot, so the data set it has available to learn from isn’t a proper example of intelligent conversation. It’s really no more likely to pass a Turing test than Eliza was.
I got to thinking, though: why don’t we see this sort of learning in games? I could probably put together a simple tic-tac-toe program that learns to play a perfect game without ever having the algorithm programmed in. It would simply record every move of every game it has played, so that in the future it could look up the current board state and repeat whichever action led to a win from that state before. This is basically how chess programs work anyway, except that the data is all pre-programmed from the games of masters.
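A minimal sketch of that state-memory learner might look like this (invented names, just to make the idea concrete):

import random
from collections import defaultdict

# state -> move -> number of wins recorded after playing it from that state
memory = defaultdict(lambda: defaultdict(int))

def choose_move(board, legal_moves):
    tried = memory[tuple(board)]
    best = max(legal_moves, key=lambda m: tried[m])
    # Repeat a previously winning move, or explore if we know nothing yet.
    return best if tried[best] > 0 else random.choice(legal_moves)

def record_game(history, won):
    # history is a list of (board_state, move) pairs from one finished game.
    if won:
        for state, move in history:
            memory[state][move] += 1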
The tricky part of this sort of AI, I suppose, is that games have just gotten too complicated. A tic-tac-toe board has a very limited number of possible states before a win is reached (so few that I’ve written a simple tic-tac-toe program in about 100 lines of code), but even moving up to a still-simple game like chess or Go, the number reaches astronomical levels. One would assume, then, that applying such strategies to a game like Civilization would be impossible, but I don’t think that’s necessarily the case.
The key would be to make the conditions fuzzy enough. Think like a human player, who might learn a lesson such as “when my treasury is low, I should build banks.” By identifying a few hundred key measurements, an AI could track any actions it took in those situations (and, of course, track the human as well). These would be stored in a database and flagged as part of either a winning or losing game. Repetition of this process would allow the best and worst strategies to filter to the top and bottom, to be picked accordingly based on the chosen game difficulty, as sketched after the next paragraph.
This has the incredible advantage of delivering what people actually want when they choose Easy or Hard difficulty in strategy games. Ideally, they want a simulation of playing against a poor or an exceptional player, respectively. Instead of receiving production bonuses, a simpleton AI would purposely make bad decisions, or at the very least not take full advantage of its learned behaviors.
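A sketch of the fuzzy version, including the difficulty knob (the features, thresholds, and game object are all invented for illustration):

from collections import defaultdict

def features(game):
    # Reduce the full game state to a handful of coarse measurements.
    return (
        'treasury_low' if game.gold < 50 else 'treasury_ok',
        'at_war' if game.at_war else 'at_peace',
    )

# (features, action) -> [wins, losses]
strategy = defaultdict(lambda: [0, 0])

def record_game(game_log, won):
    for state, action in game_log:
        strategy[(state, action)][0 if won else 1] += 1

def pick_action(game, candidates, difficulty='hard'):
    def win_rate(action):
        wins, losses = strategy[(features(game), action)]
        return wins / (wins + losses) if wins + losses else 0.5
    ranked = sorted(candidates, key=win_rate)
    # A hard AI exploits its best-performing learned behavior; an easy AI
    # deliberately picks from the bottom of the list instead.
    return ranked[-1] if difficulty == 'hard' else ranked[0]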
So why hasn’t this happened yet? I first heard mention of games being programmed with learning AI routines in the early ’90s, but I’ve seen about as many examples of it as I have playable virtual reality games.
Persistent Browser-Based Massively Multiplayer Collectible Card Game Idea
The main game screen will allow drag & drop of all the cards onto a grid abstractly representing your city. This will allow you to lay it out in an RTS-like strategic grid, which will affect siege combat. When attacking another player’s city, combat will be procedurally determined based on your troop formations and their city’s layout, thereby allowing a city to defend itself even if its owner is not online.