Note: Please change your bookmarks and RSS feeds to point at joewlarson.com. Thanks!
Now that I've finished school, I have a few web-based projects I want to work on, and I also want to continue blogging. I didn't want to continue blogging at brokedbeta.blogspot.com, because I've decided the whole concept of "Broked" was a bit too negative, and besides I'd never intended blogspot to be my permanent home. After trying to come up with a new, equally creative and meaningful name, I eventually decided that using the name my parents gave me would be simplest and allow the most flexibility moving forward.
As I start, I've got a blog running on WordPress with a very basic theme based on Toolbox by Automattic, though I might eventually create my own from scratch. I also imported some of my old posts from a couple of blogs I've had through the years. There is a lot more work left before this site is much more than a blog, so please drop by for fresh looks at what I'm up to. Enjoy!
The Trouble with College Group Work
Throughout my college career at the University of Washington Bothell, there was no end of bellyaching about group work. Personally, I found group work in most classes to be very painful and low in educational value. I completely understand the motivation for Universities to include group work as a core teaching tool for every class. It teaches many skills required to work in the professional world. As one of my Professors, Steve Schroeder, said a few years back, "In the working world, group work is the norm."
However, in the working world certain practices and conventions aid the group work process:
- Defined roles and responsibilities. There is usually some stated understanding about what work is to be done by whom, though of course there are always gray areas.
- Hierarchy, reporting and oversight. Somebody (on the team or outside it) is clearly able to spot slackers and has the authority to call them out.
- Common workspace and schedule. Either group members are in the same office for N hours a day, or the employer has taken pains to bring them together in other ways (regular video conferencing, monthly meetings in person, whatever).
- Rational group design. The group has been put together by some intelligent process to create a workable team with some balance of complementary skills.
- Longevity of teams. People tend to work with the same handful of people for 6 to 12 months or more, and develop certain efficiencies.
In school work, most of these aspects of group work are usually missing. This creates a frustrating and unrealistic group experience. It doesn't usually lead to much learning about the subject of the course or about working in groups.
Without defined roles and responsibilities, students are left to sort this out themselves. In some ways this is a good exercise, but there are definitely some serious drawbacks. In a degree program there is usually somewhat less diversity in personality and skill types than in a working organization, so some teams are way off balance. Conversely, there is often a very large difference in skill level and experience. Some groups will have nobody who can do much of anything. These groups are in real trouble because not only do they not have designated roles but they don't even know what the roles would be. Other groups might have one person who can do it all -- and usually does end up doing all the work, to the detriment of their own schedule and other students' learning.
Without hierarchy, reporting and oversight, there are always going to be slackers who do the minimum possible. Instructors generally put the burden on the students to report these people or fire them. But I have almost never seen that happen. In short-lived teams, it is usually not worth the tension. In longer-lived teams, pseudo-friendships evolve sooner than work patterns become evident, and then it's just awkward.
Without a common workplace and schedule, it is extremely difficult to actually get the work done. Especially at a campus for working adults, finding a time window during the week where more than two people can get together is nearly impossible. Most group projects therefore move online and are done asynchronously. This is actually a good representation of modern work, with teams spread across branches or working from home. But it does destroy the chance for certain types of group learning, and it limits how much work can actually be done by such a team.
Without rational group design, instructors usually go with randomly selected groups or self-selected groups. Both methods have their pluses and minuses, though neither reflects how teams are formed in the workforce. Randomly selected groups are most often used for short projects, which is the worst fit because there isn't enough time for group forming. Self-selected groups are usually more effective, but can leave a few groups of "leftover" people far worse off than the other groups.
Without a long-lived team, there is little time for forming and norming (thanks Mr. Dimeo). Groups don't have enough time to gain any understanding of each other's skills, personalities, and perspectives, let alone develop optimal ways of divvying up and managing the work.
So what is a poor college Professor to do about group work? I believe the quarter-long group-project model is the best way to address these problems, and I have seen it used fairly successfully.
Instructors can fine-tune teams in the early weeks to create more balance. They can effectively play the role of oversight by having frequent checkpoints and audits for progress. Time is allowed for team reshuffling (at the Instructor's direction) to correct issues of balance and compatibility. In practice I haven't seen as much of this as I'd like, but the opportunity is there for the Professor who takes it.
Students can "waste" those first weeks just getting to know each other and working out a common schedule. Natural roles and responsibilities can evolve. Real group learning experiences arise, as well as learning about how to work effectively in a team. Often, some pretty amazing work products come out of such projects. Best of all, real friendships can develop.
In closing, I'd like to say thanks (once again) to Professor David Socha for the email that prompted this blog entry.
A Grateful Graduate
On Monday night I walked out of my very last final at the University of Washington, Bothell. I paused for a few minutes just to stare down the stairwell, surprised by the magnitude of my feelings. Three and a half long years ago I had resumed my journey toward a college degree after a lengthy hiatus. Hundreds of hours of classroom time, group work, reading, studying, coding, writing, and generally scrambling, and now suddenly it was all over. Assuming I hadn't unexpectedly flunked anything, I had earned my Bachelor of Science in Computing and Software Systems.
Beyond the feelings of accomplishment, relief, and even a twinge of sadness to see the end of another phase of my life, my strongest feeling was gratitude. Make no mistake -- I had worked incredibly hard to earn this degree, while trying to still do good work at my job and be a good family man. But there are any number of ways in which this mission could have been far more difficult or even impossible. Without a supportive, flexible employer to make the afternoon school hours possible, state funding and federally backed loans to make the financial side workable, and a great school with a degree relevant to my career just a ten-minute drive from my home, earning a degree would have been an unworkable puzzle. If my wife (who also worked full time and finished her degree during this period) had been less patient, our family dysfunctional, our friends and neighbors less helpful, or our other life circumstances slightly different, this would have been a miserable endeavor.
It is only a wealth of good luck that made it possible to get to this point. I hope that I can take what I have earned and give back to those who helped me earn it.
In closing, I want to say that after the attention last week's post received, I have been trying to think of something interesting to post related to Cynefin. In fact, all week I have been adding to a list of possible post topics, related to Cynefin and otherwise. But with the holiday madness I haven't had time to put together anything too serious, and that will probably be true for another week or two. In the meantime, my gratitude is the most profound thing I could write about this week, even if slightly off topic. Happy Holidays!
Software Testing and Cynefin
Cyne-what? That's exactly what we said in Software Testing class when our Professor, David Socha, gave us a brief overview of Cynefin. In fact, Mr. Socha said he wasn't sure how to relate it to testing at all. However, in the couple of minutes he spent on it, my mind was opened to a new way of understanding things.
Cynefin (pronounced "Kan-a-vin") is a Welsh word meaning "place". It is "a model used to describe problems, situations and systems" (Wikipedia), created by Dave Snowden. In Cynefin, a particular system is in one of four main domains: Simple, Complicated, Complex, or Chaotic.
A simple system, like a bicycle, is easy to understand and can be taken apart and put back together without changing its nature. Cause and effect relate directly. With a simple system, there are best practices -- the one best way to fix a particular problem. To solve problems, you Sense, Categorize, and then Respond. The categorization helps you identify which best practice you need to use.
A complicated system, like a train, is just an amped up simple system with more parts. The relationship between cause and effect involves more parts and steps. The approach in this system is to Sense, Analyze, and then Respond. The analysis helps you identify a good practice -- but there may be no best practice for any given problem.
A complex system, like a metropolis or a living thing, is not as straightforward. Parts interact in subtle and evolving ways. Cause and effect are intertwined here, and it is often difficult to understand what is happening until after it has occurred. Changing some part of the system cannot be fully reversed, since the whole system will have adapted along the way. In this system you must Probe, Sense, and then Respond. Through this process you reach an emergent practice, which continues changing as you learn and the system changes.
A chaotic system is one in which "there is no relationship between cause and effect". In such systems you can only arrive at a novel practice, which you reach by Acting, Sensing and Responding, and which probably won't work the same way again. I struggled to think of an example here. I thought of a war or a natural disaster, but there is definitely a kind of logic in those systems. Maybe a bar fight (sadly I have no direct experience there)? Then I realized a good example might be the Great Depression. I am reading about the presidency of FDR in Jonathan Alter's "The Defining Moment: FDR's Hundred Days and the Triumph of Hope". FDR's approach to fixing the economy was essentially "do something even if it's wrong". In a chaotic system, you'll be better off acting and then seeing where that takes you than waiting around to understand what's going on.
There is a fifth domain, "Disorder", which is more of a state of not knowing what type of system you are in than it is a specific system state.
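As a quick aid to memory, the four main domains and their decision sequences can be summed up in a few lines of code. This is just my own encoding of the summary above, not anything from Snowden's materials:

```python
# A toy encoding of the four main Cynefin domains and the decision
# sequence each one calls for, per the descriptions above.
from enum import Enum


class Domain(Enum):
    SIMPLE = "sense -> categorize -> respond"    # best practice
    COMPLICATED = "sense -> analyze -> respond"  # good practice
    COMPLEX = "probe -> sense -> respond"        # emergent practice
    CHAOTIC = "act -> sense -> respond"          # novel practice


def approach(domain: Domain) -> str:
    """Return the decision sequence suggested for a given domain."""
    return domain.value


if __name__ == "__main__":
    for d in Domain:
        print(f"{d.name.title():12} {approach(d)}")
```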
As I've thought about these categorizations, I've been amazed at how applicable they really are. For example, in two of our software projects at work we've been dealing with pattern recognition problems. In both cases, our software is processing human-generated, messy data and trying to interpret it in a consistent way. We began in each case by trying to deal with the data-sets assuming they were simple systems. Over time our approach to them became more complicated until finally we were trying to deal with them in their actual domain -- complexity.
However, our efforts to deal with the data-sets in all their complexity produced inconsistent and unpredictable results. You could never be sure what would "come out the other end" or why you were seeing a particular result. If we had been dealing with another complex system, like a disease or an economy, no one would expect consistent, completely understandable results. But people expect computers to make sense, so the results were not acceptable.
In one project, we solved the problem by inventing a way to carve up the data set into separate, more internally consistent groupings, which we could then deal with as complicated systems. In the other project, the customer involved decided they were better off investing the time to clean up their data and that we should simplify our logic so it was merely complicated instead of complex. I've discussed this model of these projects with the developers involved and it definitely provides some fresh insights into what we've done and even how we can improve it.
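To make the carving-up idea concrete, here is a minimal sketch of that approach. The record fields and the grouping heuristic are invented for illustration; the actual product logic is more involved:

```python
# Hypothetical sketch of the "carve up the data set" approach: split messy,
# human-generated records into internally consistent groups, then handle each
# group with simpler (merely complicated) rules.
from collections import defaultdict


def consistency_key(record: dict) -> str:
    # Invented heuristic: records from the same source using the same date
    # format tend to follow the same conventions.
    return f"{record.get('source', '?')}/{record.get('date_format', '?')}"


def partition(records: list[dict]) -> dict[str, list[dict]]:
    groups: dict[str, list[dict]] = defaultdict(list)
    for record in records:
        groups[consistency_key(record)].append(record)
    return groups


records = [
    {"source": "branch-a", "date_format": "mdy", "value": "03/05/09"},
    {"source": "branch-a", "date_format": "mdy", "value": "12/01/10"},
    {"source": "branch-b", "date_format": "iso", "value": "2010-12-01"},
]
for key, group in partition(records).items():
    # Each group can now be parsed with one consistent, simpler rule set.
    print(key, len(group))
```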
So what does it have to do with testing?
First of all, I think we can subdivide various types of testing according to which sort of domain each deals with.
- Simple
  - Record/Playback testing
  - Smoke testing
  - Unit testing
- Complicated
  - Automated testing
  - Black box testing
  - Compatibility testing
  - Conformance testing
  - Integration testing
  - Model based testing
  - Regression testing
- Complex
  - Beta testing
  - Performance testing
  - Scalability testing
  - Security testing
  - System testing
  - Usability testing
  - White box testing
- Chaotic
  - (thankfully, nothing really fits here, hopefully because we try not to let software get to the point of chaos, though possibly because once software is in this space it is untestable)
Obviously this list is not comprehensive and I am sure there are any number of other ways to place specific items. However, it is probably helpful to classify the software system you are trying to test and then determine what type of testing you ought to be doing. When testing your own well-defined methods and APIs, unit testing is certainly sufficient, because your own code is a simple system. If you have an entire stack of different software layers with pieces living on different systems and interacting in myriad ways, then unit testing is very insufficient. In such a complex system you need different testing techniques. This might seem obvious to a veteran tester, but possibly not to someone new to the subject.
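For instance, here is what the simple-system end of that spectrum might look like: a pure function with direct cause and effect, fully covered by a couple of unit tests. The slugify helper is hypothetical, invented just for this sketch:

```python
# Minimal illustration of testing a simple system: a pure function with
# direct cause and effect, fully covered by a few unit tests.
import unittest


def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())


class SlugifyTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Software Testing and Cynefin"),
                         "software-testing-and-cynefin")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")


if __name__ == "__main__":
    unittest.main()
```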
Second, it seems like new testing techniques could be derived from other fields by understanding Cynefin. For example, in the field of medicine various forms of diagnostic tests are used to identify disease, which occurs in the complex system of the human body. Can these be used as a model for complex software testing methodologies? Economics and politics work on complex systems and have testing in the form of economic indicators, polls, and so on. Is there a way to carry these testing approaches, where successful, over into software testing?
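To make the indicator analogy concrete, here is one speculative sketch of what an "indicator" style test for a complex system might look like: sample a system-level metric and flag drift from a baseline rather than asserting exact outputs. The probe stub, baseline numbers, and tolerance are all invented:

```python
# Speculative sketch of "indicator" testing for a complex system: rather than
# asserting exact outputs, sample a system-level metric and flag drift from a
# historical baseline, much like an economic indicator.
import random
import statistics


def measure_latency_ms() -> float:
    # Stand-in for a real end-to-end probe of the deployed system.
    return random.gauss(120, 15)


def latency_indicator(samples: int = 50) -> dict:
    readings = sorted(measure_latency_ms() for _ in range(samples))
    return {"median": statistics.median(readings),
            "p95": readings[int(samples * 0.95) - 1]}


BASELINE = {"median": 125.0, "p95": 160.0}  # invented historical values
TOLERANCE = 1.25  # investigate if 25% worse than baseline

for name, value in latency_indicator().items():
    status = "OK" if value <= BASELINE[name] * TOLERANCE else "INVESTIGATE"
    print(f"{name}: {value:.1f} ms ({status})")
```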
Finally, just being able to understand and explain things in terms of Cynefin could aid a tester. How many times have unrealistic expectations been placed on a test team because management thought the software was like a bicycle (simple) when in fact it was like a metropolis (complex)? How often do we developers assume testing will be easy because we fail to understand how all our simple features are used in concert by real users? Conversely, how often are testers overwhelmed because they cannot figure out how to test something, when they could focus on carving the software into complicated or simple pieces which are more testable?
I think Cynefin is incredibly relevant for the field of software testing as well as software development in general. I hope to be able to learn more about it in the near future.
For further info, take a look at Cognitive Edge, A simple explanation of the Cynefin Framework and The Cynefin Framework.
Update: It is exciting to see this post get some attention! Dave Snowden himself posted about this on his own blog, as did my professor David Socha on his blog. Great!
Tags: Cynefin, Portfolio, University
What is the meaning of the "X" Icon?
It sometimes means "close the current view/document/window", and it sometimes means "delete this row/file/object".
These two meanings are obviously related -- they mean "get rid of this thing". But X to close is really just hiding the X'd item in the sense that you can usually bring back that same view, whereas X to delete is a permanent annihilation of that item.
This distinction seems very obvious to an experienced user, and can usually be inferred from both context and position within the user interface. However, lately I've observed some novice computer users who haven't internalized this distinction getting confused with this dual use of X.
To make matters worse, X is sometimes used to mean other things as well. I've seen it used to mean "cross-reference" in a complex industrial cataloging application. A more typical example is that in Chrome and Safari you will see an X next to the address bar when you are waiting for a webpage to load:
In this case X means "cancel", which meshes somewhat with modal dialogs that use X on a cancel button (interestingly, I could not find an example of this but I know I've seen it). But the same exact X shape is used in Chrome for closing the window, closing a tab, and cancelling search. I wonder how many people this has confused?
Since the days of the Xerox Star system, icons have been a very useful convention in graphical user interfaces. They can serve several purposes:
1. Add some graphical interest to screens that might otherwise be a sea of text.
2. In some circumstances, provide a small but semi-meaningful click-target without text.
3. In other circumstances, enlarge the target area of a menu item or button.
4. Provide an additional clue to new users as to what a particular button or menu item will do, theoretically digested more quickly and more subconsciously than text.
5. Create a point of visual recall that lets experienced users find the item they are looking for more quickly than text alone would.
The X icon succeeds on #1 and #2, skips #3, but can fail at #4 and #5 for less experienced users.
Suggested Fixes
1. Some applications use a different type of X to mean delete, with a kind of a paint-stroke look to it.
2. Always labeling the icon can help eliminate confusion.
3. Using a different icon for delete can help.
The Microsoft Outlook ribbon bar shows an example of #1 and #2:
An example of #3 is Twitter's use of a trash can icon for delete:
Avoiding confusing re-use of the X icon for different meanings can help alleviate usability issues for new users.
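As a toy illustration of fixes #2 and #3, here is a minimal Tkinter sketch of my own (not drawn from any of the applications above) that pairs each icon with a text label and uses a distinct glyph for delete. The glyphs are stand-in characters, not real icon assets:

```python
# Toy sketch of suggested fixes #2 and #3: give icons text labels and use a
# distinct glyph for delete so it cannot be mistaken for close.
import tkinter as tk

root = tk.Tk()
root.title("Labeled icons demo")

# Fix #2: label the close action instead of showing a bare X.
close_btn = tk.Button(root, text="\u2715 Close", command=root.destroy)
close_btn.pack(side=tk.LEFT, padx=8, pady=8)

# Fix #3: use a different icon (a trash can) plus a label for delete.
delete_btn = tk.Button(root, text="\U0001F5D1 Delete",
                       command=lambda: print("item deleted"))
delete_btn.pack(side=tk.LEFT, padx=8, pady=8)

root.mainloop()
```

Even stand-in glyphs like these become unambiguous once a label is attached.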
Updated with a new title - Thursday, December 2, 2010.
Elf and the Stigma of Testing...
In my Software Testing class we've talked and read about how testers carry a stigma in some circles, often (incorrectly) being perceived as sub-par techs or programmers who couldn't hack it. Yesterday my daughters were watching the best Christmas movie ever made, Elf. I had to laugh at the part where, when Buddy couldn't keep up with the elves in toy-making, they sent him to where the "special elves" worked -- in toy testing. So the testing stigma has spread even to the North Pole!
RockMelt Browser Early Access - Not Broked!
I'd read a bit about RockMelt in the past few weeks and registered with them to get early access. So I was excited to see the RockMelt download invitation pop up in my gmail last night:
Hello,
You recently requested early access to RockMelt. We just hooked you up!
The download and install were painless. My early impression is that it is exactly what the demo video would have you believe (see below). Since it is based on Chromium, almost everything about it is the same as Google Chrome, except for the RockMelt-specific features - the Friend Edge, the "app" Edge, the sharing feature and the search preview functionality. These all work exactly as billed, though I have three feature requests:
- Show me Facebook notifications on the Friend Edge (if there is a way to do this, I can't find it)
- Allow me to import my Google Reader feeds into the App Edge.
- Please find a better name before you launch big! Saying "RockMelt" out loud feels like I have marbles in my mouth, and people look at me cross-eyed when I mention it.
Anyway, I'll try using this for a while and see how it feels. I'll let you know if I find bugs - none so far!